Showing posts with label microprocessor. Show all posts

Wednesday, January 15, 2014

Commercial Spaceflight Isn't About Sending Fatcats to Space

I've seen a fair bit of "hate the rich" sentiment toward various "space tourism" ventures out there, such as Virgin Galactic's White Knight 2, XCOR's Lynx, and other development efforts toward reusable suborbital spaceflight. I suspect this is driven by the usual specious media reports about space tourism, which lean on seat costs in the hundreds of thousands of dollars to drive the class-warfare point home. Naturally, the first image that comes to mind on seeing these reports is some overstuffed fat cat buying a $200,000 joy ride at the expense of others.

The fact is, a point has been missed here. Space tourism is the economic model that's being used to draw investment for this work, but it's not the reality of where this work is going. In order to get money to do something new, without a proven business model, you have to build a business model from scratch. This means finding something that will presumably pay the bills and earn enough to pay investors a return at least as good as an investment in another proven economic activity, such as investing in a restaurant or manufacturing. In fact, the return needs to be better, at least on paper, to induce investors to take the risk of not putting their money into something proven.

For reusable suborbital launches, the case was made using space tourism. That's because none of the other potential uses is really well known, but a unique luxury offering could be pretty well characterized, and counted on to deliver a return.

But the real value in these quick trips to space lies elsewhere than in joy rides for someone with $200,000 burning a hole in their pocket.

It's Like a Reusable Mercury-Redstone for Suborbital Research

The Next Generation Suborbital Researchers' Conference is one group that's excited about the prospect of cheap suborbital flights. Currently, the overall cost of a suborbital flight on a sounding rocket is about $3.5M. The cost to the users is less, because much of that cost is subsidized by NASA, and because launches are shared between institutions. It costs each institution about $50,000 to $500,000 per launch, with somewhere between half a dozen and a dozen institutions splitting the costs among themselves. This gives them the chance to launch half a cubic foot to a cubic foot of payload on a short rocket flight. The payload will experience at least 25Gs of acceleration, with shocks of double that or more. Nobody can ride along, so the success of the flight depends on the researchers' ability to automate their payload (adding considerably to the cost of building and testing the payload before flight.)

Flight on one of the new commercial launchers will cost an institution about $50,000 to $400,000 per launch. They get to send a person to operate the experiment--and to experience the ride--if they choose. They may also buy a package where they send their equipment, which is then operated either by a space tourist paying their own way who has been trained to run the experiment, or by the spacecraft's crew.

The commercial space operators are already putting together special deals for the space researchers. They can buy into multiple flights at discounted rates.

Plus, their payloads can experience flight at human comfort levels, 3Gs or less, with controlled temperature, air, and so on. This results in far less cost. The same instrument package used on the desktop in the university's lab can be the one sent on the flight. It doesn't have to go through vacuum testing or extensive testing of its automation under different conditions, and it doesn't have to be hardened against high shock loads. Standard safety design and testing for not bursting into flames and filling the cabin with smoke will still be necessary, but that's a big step down in cost and effort from what's required for a sounding rocket flight. That drops research costs even more.

Another important point is that these flights will be far more available than sounding rocket flights. NASA launches somewhere around a dozen sounding rocket flights each year. The commercial flights will be more frequent, and easier for an institution to get a payload on board, deal with schedule changes, and so on.

Teachers in Space

Another purpose of the new commercial "tourism" flights is to send the sort of tourists I think most of us would want to send. Teachers in Space is a program that can't wait to use commercial spaceflight to send teachers into space with student research at all levels of education. Speaking as a part-time teacher myself, I can say that it helps students a lot to hear about science and spaceflight from someone who's actually been involved in it. If we have a growing cadre of science teachers who can start a statement to students with, "When I flew in space...", and who work alongside them on projects that will actually go into space, it will bring a sense of reality and engagement to their education that's so hard to get otherwise.

Other Desirable Space Tourists

There are plenty of other people we, as a society, would like to see get a chance to experience space travel. Make A Wish Foundation flights? Rewards for science fairs? Small companies doing their own research to compete against larger ones? There are many, many uses for these vehicles that have nothing to do with the ultra-rich burning off spare dollars.

Opening Up Space

Space tourism just happens to be the easiest way to show that there's a potential profit at the end of the long development process for those who invest in the companies making this happen. So don't be fooled by reports making the commercial space industry out to be nothing more than a new form of luxury for plutocrats. This is about giving little people the access to space that's so far been limited to governments and richer institutions. It's the same sort of revolution that we got with the microprocessor, which brought computers into our homes, then into our pockets. Once upon a time, the computer was known to the average person as a tool of oppression. When your bank or government told you that their computer said you owed them so much money, you were stuck fighting a battle against the authority of a tool you didn't have. When we got our own computers, we got the power to tell them back, "Well, my computer says..."

Now, we're on the verge of having space access be democratized in the same fashion. Virgin Galactic, XCOR, and Blue Origin are not the end of this particular road, any more than the first heavy, balky, difficult to build and use microcomputers from before 1977 were the end of the process of democratizing the computer. But if early public sentiment had risen to kill off the early small computers as nothing more than toys for the rich, where would your tablet and cell phone come from today?

Be glad the rich are there, willing to buy tickets for a space adventure. Because they're there, the way is being opened for your kids and their teachers, their work and research.

In the words of Alan Stern, "The access revolution is about to happen. When these guys are flying all the time, and you can fly an experiment over and over and over and get different data sets all the time, close the loop and fly an experiment the next week and the week after, I think we're going to see the best applications be things we haven't thought of yet, because we're kind of looking at it through old eyes." (Aviation Week, June 17, 2013, "Suborbital, But Reusable" reporting on the 2013 NGSRC.)

Tuesday, November 26, 2013

Why Electronics Took Over the World


How did we end up in a world where computers are everywhere?

Originally, we had vacuum tubes as electronic components. Each of these had to be hand-made. When you consider that even the most basic computer, about the power of a programmable calculator, requires about 4,000 electronic switches (including some basic control, memory, and interface circuits), you can see that needing 4,000 hand-made parts is going to get expensive. And that's before you wire them together into a working computer. It's like having to hire a team of scribes every time you want a new book.

Each of those tubes is like a decorated capital drawn by a scribe.

Transistors were a big step forward. Transistors aren't made one at a time by hand. Packaging them involved some hand work back when they were new, but the guts of them were produced en masse. Making transistors was like printing a sheet covered in letter "B" so that you could cut them up to have a letter B to stick wherever you need one. Similarly, transistors are made in a large group, which is then cut up into individual transistors then packaged for use.

So why not print the equivalent of a small piece of often-used text, rather than cutting it up into individual letters? This is the basis of the integrated circuit. It was another step forward in reducing the cost of electronics manufacturing. The first circuits were like having commonly used words, in complexity. Over time, technology advanced to allow more and more sophisticated circuits.


Eventually the circuits got more and more complex, and more useful. Building a computer got to be about as complex as creating a book on a typewriter. That means it took patience, and skill, and it was still expensive, but not nearly as expensive as hiring a team of scribes.

Each integrated circuit at this point had from a few to as many as a few hundred transistors on it. Building a basic computer circuit could be accomplished with a couple of hundred ICs.

The next step was a big one. Integrating the entire guts of a computer onto a single die, then printing them not one at a time, but by the tens then the hundreds at a time.

In the mid-1970s, enough transistors were printed together, in the right circuits, to make a basic computer. Add some memory (another technology that had recently benefited from the improvements in integrated circuits) and a few ICs for control and for interfacing to the outside world, and a complete computer could be built out of a handful of chips. Like my MAG-85 computer project, which uses about 10 ICs to build a basic 70's-style computer.

But that wasn't enough. It was enough for calculators and for very simple computers that required a high level of skill to get the most out of them. If we'd stopped there, only very technical or very driven people would have computers. We had to increase their complexity to make them more capable, and easier to use.

Since then, we've improved our "printing processes" to allow us to produce integrated circuits that contain not just a few thousand "switches", but billions. Your computer, cell phone, or tablet contains the equivalent of billions of vacuum tubes. And yet, those billions of sub-microscopic electronic switches all together require less electrical power to operate than one single vacuum tube. They also generate less heat.

If we put the entire world population to work building electron tubes as fast as they could, we couldn't produce enough tubes in a year to reproduce the computing power of a single cell phone--in part because we couldn't build tubes that switch as quickly as the transistors in a cell phone.
The guts of a simple vacuum tube.
Imagine building a few billion of these, by hand. Image courtesy RJB1.
But the computer in the heart of that cell phone is one chip that was printed alongside hundreds of others just like it in a mass production process that's very similar to printing. Many of today's computer chips literally cost less to make than a printed magazine or book. Far less, usually.

This triumph of manufacturing, reducing electronics to a simple, inexpensive, high volume printing process, is why we have computers everywhere from our cell phones to our irons and dishwashers. They're cheaper to build than the parts they replace.

Have a look at a current computer chip sometime. Inside it are several billion man-made structures. You could look at them with a microscope if the top were removed, but you would only see patterns, not individual elements. The individual elements are too small to see in visible light now.
There are several billion man-made things in this image.

Wednesday, November 13, 2013

Lee Felsenstein at Homebrew Computer Club Reunion

Lee was the MC at the main part of the club meetings back in the day, and he reprised that role on the night of the HCC reunion. He was also the designer of the computer of that day that I most desired, the Sol 20. I loved that system--the look, the keyboard, its operation.

Image by cellanr
There were just two things you wanted to know about the Sol to make life happier. First, build the fully expanded system right at the outset: opening up the heart of the system to expand it later was a major PITA. The other? Use someone else's disk subsystem. Though with the information available today, a Helios disk subsystem could probably be made to work.

I still have the sales brochures for the Sol 20. I pull them out every now and then to drool over them again. Part of it is nostalgia, but part of it is the great design itself. Actual Sol 20s sell for more than I can afford, but perhaps I'll build myself a look-alike system from sheet metal and walnut wood sometime, anyway, and print up a nice black name badge.

I still have an Osborne 1 computer. It's one I got only relatively recently. It's pretty well maxed out on upgrades (disk upgrades, video upgrades, etc.) and is a pleasure to use. It's not as pretty as a Sol, but I enjoy showing it off in current-day computer classes. The kids love it--especially the floppy disk drives and the tiny screen. But...they get hooked on Zork.

Lee Felsenstein Today

In our conversation last Monday, Lee showed me a project he's working on today as an educational tool. It's a programmable logic simulator, targeted at middle school students. What Lee showed me was a pair of printed circuit boards that have captive fasteners to clamp them together around a plastic matrix. The matrix holds surface mount diodes, which the students can place into the matrix to program it. In essence, it's a 16 by 8 programmable logic array that is programmed through physically locating the diodes.

OK, I know that sounds totally abstruse to many of you, so let me tell you what makes this a great idea, and why your middle schooler ought to know about this stuff even if you've gotten through life without having to so far (assuming you don't know already).

The core of a computer is built out of logic circuits. The memories feed the logic circuits with data (in current designs--it doesn't have to be that way, though it's presently the assumption); in essence, programmable logic is the complement of memory. This analogy of logic and memory as complementary components of a computer holds on many levels. It's possible to build logic out of memories--I've done it--but it's not efficient.
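That "logic out of memories" trick works because a memory is just a stored truth table: wire the logic inputs to the address lines, and the data that comes out is whatever function you stored there. Here's a minimal sketch of the idea in Python (the XOR example and function names are mine, purely for illustration):

```python
# A memory used as logic: the address lines are the logic inputs,
# and each stored bit is one row of the truth table.

def build_lut(func, n_inputs):
    """Fill a 'memory' so that reading address a returns func applied
    to the bits of a. This is how a ROM can implement any logic."""
    return [func(*((a >> i) & 1 for i in range(n_inputs)))
            for a in range(2 ** n_inputs)]

# Example: a 2-input XOR stored in a 4-entry memory.
xor_rom = build_lut(lambda a, b: a ^ b, 2)

# "Reading" the memory evaluates the logic: address = inputs packed as bits.
assert xor_rom[0b00] == 0
assert xor_rom[0b01] == 1
assert xor_rom[0b10] == 1
assert xor_rom[0b11] == 0
```

The inefficiency mentioned above shows up in the size: the table doubles with every added input, where real gates grow much more slowly.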

Initial education in logic circuits can be accomplished with a simple breadboard and some logic chips. A few AND chips, OR chips, NAND chips, inverters, and so on. Add some resistors and LEDs and the kids are off and running. For a little while. Once they master this, and understand what's going on, they immediately start expanding their ideas.

Then a problem hits. More chips and more wiring between them mean more complexity, and more difficulty in realizing their ideas.

At this point, it's possible to introduce them to programmable logic devices. Teach them that the logic functions they had in the ICs live inside the PLDs, and that they can program the devices rather than run wires. The problem is that this is a big, big jump up in abstraction level, especially for a kid in the middle school age bracket (which is the perfect age to introduce this stuff, as I'll go into later.)

Lee's invention, by contrast, maintains a physical element. The programming is accomplished by manually placing diodes into a matrix, rather than typing characters on a screen and then punching the 'program' button to dump it to a flash PLD. This keeps it from getting too abstract, encourages experimentation, and maintains the hands-on element that's necessary for students in the 9-13 age range.
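To make the diode-matrix idea concrete, here's a small Python model of a 16-by-8 array where "programming" is just placing diodes at row/column intersections. The OR-style organization and the names here are my own illustration of the classic diode-matrix principle; the actual board's circuit details may differ:

```python
# Sketch of a 16x8 diode-matrix logic array, in the spirit of the board
# described above. Placing a diode at (row, col) lets input row `row`
# drive output column `col`; each output is then the OR of the input
# rows that have a diode in its column.

class DiodeMatrix:
    def __init__(self, rows=16, cols=8):
        self.rows, self.cols = rows, cols
        self.diodes = set()          # (row, col) positions holding a diode

    def place(self, row, col):
        self.diodes.add((row, col))  # like pressing a diode into the matrix

    def outputs(self, inputs):
        """inputs: list of 16 bits -> list of 8 output bits."""
        return [int(any(inputs[r] for r in range(self.rows)
                        if (r, c) in self.diodes))
                for c in range(self.cols)]

m = DiodeMatrix()
m.place(0, 0)   # output 0 = input 0 OR input 1
m.place(1, 0)
m.place(2, 1)   # output 1 = input 2

ins = [0] * 16
ins[1] = 1
print(m.outputs(ins))   # output 0 goes high; the rest stay low
```

Every "program" is visible as a physical pattern of parts, which is exactly the pedagogical point.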

Building Blocks of Electronics

Electronic logic is building blocks. Your kids play with building blocks, right? They start with simple structures to learn how to build more complex structures. Before long, they can use every single piece they've got building large, complex structures. Once the individual blocks and a few simple ways of interconnecting them are understood, they can take off and make great big projects that reach to the ceiling.

It's the same with electronic logic. It's a collection of simple building blocks. The problem is, the complexity of assembly is a little greater. Enough that once you get past a certain level (I'd say 20-30 ICs), it gets progressively more difficult to implement your ideas. The ideas out-race the ability to construct.

This shouldn't be an obstacle. The ideas should be allowed to continue to grow, without removing the physical aspects that make the activity interesting.

The Lee Felsenstein Magic

Lee has hit a sweet spot here. With all the excitement about the Raspberry Pi (my criticisms of which, as an educational tool, I will save for a future article), Lee's project should have that sort of excitement going for it. This is about students building their own processor. This knowledge is important. It's what the people behind the microprocessor revolution used to change our lives. This is the knowledge that put a CPU in your telephone, your oven, and your iron. This is what tunes your radio.

Assembling a processor from random logic is a huge project. Yes, people still do that (I've even built a very, very simple one from racks of relays, myself, under cover of testing those relay racks and their support wiring after installation.) Building your own processor with a PLD is a lot easier, once you understand the building blocks.

Lee explains himself well on his project page. Have a look. I will be following the progress of the project.

And I'm really glad I got a chance to meet up with Lee again after all these years. He was one of my mentors and inspirations in my youth, just as he describes those who mentored him. It seems to be a common thread that those of us getting older want to assist the younger generation just as we were assisted when getting started in technical pursuits (as hobbies--the jobs came later.)

And if you're raising a kid--don't just foist off software on them as something to play and "learn" with. Software isn't reality. I've designed any number of computers on paper and in software, and then gone on to build far fewer of them. Because software and paper aren't the real thing. The real thing has all sorts of little niggles and oddities that you'll never learn about in any way other than doing the real thing. Teach your kids to solder, use solderless breadboards, and use real components at all levels of complexity. Don't try to do too much at once: start with kits, then move toward recreating circuits on breadboards, then to soldering them on prototyping boards.

But do the real thing. Right alongside your other crafts projects. Because electronics is just as much a craft with some useful products as is crochet or embroidery (both of which I do) or quilt-making or sewing (which some of those close to me do). And most of all, have fun!

Tuesday, November 12, 2013

Ted Nelson & Computer Lib at Homebrew Computer Club Reunion

I had a great time at the Homebrew Computer Club Reunion last night, which, I learned, was made possible by a Kickstarter (thank you, KS backers!)

One of the great conversations I had there was with Ted Nelson, author of Computer Lib/Dream Machines and his wife, Marlene Mellicoat. My wife and I had a wonderful time speaking with them. Ted has published a new edition of Computer Lib. It's not a reprint from scans of the original, but a new printing from the original negatives. It's as clear and sharp as the original was back when, possibly even better. It's in the same large format, as well, not scaled down for the size of paper that happens to be cheap and convenient for most books.

Sorry about the fold, I only had my hip pocket as a place to put his flier last night.
I was working so hard at being social last night that it didn't even occur to me that I could probably have purchased a copy directly from him right then. I saw that he had a number of copies in his bag, too. It's little things like this that I always think of when people tell me how smart I am. Yeah, about some things, maybe, but about other things, not so much.

Nevertheless, I'm going to purchase it now, after the event. I read someone else's copy back when, having noticed it as a pillar on a bricks-and-boards bookshelf among a number of copies of The Fabulous Furry Freak Brothers (Fat Freddy's Cat was my favorite of the crew.) Now I'm looking forward to having a Computer Lib/Dream Machines book of my own.

If you're not familiar with Ted's work, I strongly recommend correcting that. The web could be so much more than it is, and require far less human "curation" than it does, if it hadn't turned into the mishmash mess of information without proper structure that it has become. I'd say more, but rather than reading my take on what he thinks, go to the source:





Hopefully I'll have a chance to post more about last night's event in future articles. There was so much packed into so little time that my head is still spinning from it. (They managed to recreate the atmosphere of the original meetings perfectly in that regard.)

It was great that my wife got to hear Ted speak during the formal presentation portion of the evening, too. I got to hear him speak a few times back when; he's a dynamic and engaging speaker. He makes you think about how things could be, possibilities that are better than reality. Now we have hearing Ted speak as a shared experience.

Tuesday, November 5, 2013

Hidden Ogres Spotted?

I was a backer in the Ogre Designer's Edition Kickstarter. Now that they're shipping out the rewards, I've been watching the updates closely. Today, they posted images that they claim had "stealth ogres" in them. At first I was skeptical.

Then I pulled out an old, experimental pair of multi-spectral image enhancement goggles that we were playing around with in the lab back in the late cold war boom of the 80s. You know, when we actually invented everything that Popular Science raves about being the "latest thing" now (except for the stuff that was invented in the 50s and 60s, of course).

There they were! At least two Ogres, possibly more. They have very sophisticated stealth capabilities. Have a look:

At least one Ogre (a Mark V?) concealed in a nearby pattern of shadows.


Two or more Ogres concealed in plain sight, possibly including the dreaded Ogre Mark VI!

Wednesday, August 15, 2012

Parallax Propeller + COSMAC 1802: the Saga Continues...

Monday morning I posted about hooking up a Parallax Propeller QuickStart board to an 1802 microprocessor. My hope is to make the Propeller behave as a RAM for the 1802, primarily to be able to display a bitmap of that segment of memory as video. The idea is to make it software-compatible with the 1861 Pixie chip (the 1802's native video adapter), which is presently unavailable.

Since then I've spent more time on this project. A lot of that time got tied up in side issues related to tying a Propeller to an 1802, as well as some time spent dealing with the test equipment I want to use to keep an eye on the circuit and see what it's up to.

First, here's how it looks tonight:

Propeller QuickStart hooked up to an old CDP1802 microprocessor.
The second stage of the Propeller/1802 mashup circuit. Click for full size.
I've got a closer relationship between the two processors now. The Propeller is actually doing something. Before, the 1802 was getting power and a fixed clock signal off the QuickStart board, but the Propeller was not itself involved.

Clocking the 1802
One thing I put a fair bit of time into (too much, frankly) was using the Propeller to control the clock of the 1802. There are a few reasons I think this would be nice:

- The clock can be varied in software depending on the user's desire and the individual 1802's capabilities.
- At some later point, the Prop can have a greater degree of control over the 1802's operation in general, with the ability to stop it, start it, and even single-step it.

On my first stab at this a couple of days ago, I just threw a repeat loop into the Prop's code to pulse an output line. I didn't have very good control of the rate, I didn't have the constants set right, and it didn't work right off the bat so I decided to drop it and just use another clock.

Let's Try a Timer, Because...That's What They're For
This time I took the time to read up on using the counter/timers on the Prop. Each cog has a pair of them, and I played around with one for a while before feeding its output to the 1802's clock input. After a while I was able to get the results I wanted, and to fine-tune that bit of the code.
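For reference, in the Propeller's NCO counter mode the FRQA register is added into PHSA on every system clock and the pin follows PHSA's top bit, so the output frequency works out to clkfreq * FRQA / 2^32. A quick Python check of the constants involved (assuming the usual 80MHz system clock):

```python
# Propeller counter in NCO mode: FRQA is added to PHSA every system
# clock, and the output pin follows PHSA bit 31, so
#   f_out = clkfreq * FRQA / 2**32
# which gives the register value for a desired output frequency:

CLKFREQ = 80_000_000          # assuming the usual 80 MHz system clock

def frqa_for(f_out):
    return round(f_out * 2**32 / CLKFREQ)

print(frqa_for(1_250_000))    # 67108864 -> exactly 1.25 MHz
print(frqa_for(3_200_000))    # value for the 3.2 MHz "rock solid" speed
```

Frequencies that divide the system clock evenly (like 1.25MHz) come out exact; others are approximated to within a fraction of a hertz by the 32-bit phase accumulator.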

Then I hooked it up to the 1802 at a frequency of 1.25MHz...well, I should say I hooked it up to a 4049, passing the signal through a pair of inverters to square it up a bit and buffer it. Then I passed it along to the 1802, which ran just fine.

At this point I still had the data bus of the 1802 hard wired to a $C4, which is a NOP instruction for the 1802. That way I can watch LEDs on the address bus and see that the chip is running through its address space. The pulse rate of the high order LED gives me an immediate idea of how fast the 1802 is getting clocked, too.
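The free-run NOP trick also makes the LED rates easy to predict: the 1802 uses 8 clocks per machine cycle, and the $C4 NOP is one of its three-cycle instructions, so the program counter advances once every 24 clocks. A rough check in Python (assuming the high-order LED effectively shows the top bit of the 16-bit count):

```python
# Blink-rate sanity check for the free-running NOP test. On the 1802,
# one machine cycle is 8 clocks and the $C4 NOP takes 3 machine cycles,
# so the program counter advances once every 24 clocks.

CLOCKS_PER_NOP = 8 * 3

def top_bit_period(clock_hz):
    """Seconds for the top address bit to go through one full
    high/low cycle as the PC wraps through all 65,536 addresses."""
    return 65536 * CLOCKS_PER_NOP / clock_hz

print(round(top_bit_period(1_250_000), 2))  # ~1.26 s at 1.25 MHz
```

So a roughly once-per-second blink on that LED is the signature of a healthy 1802 at this clock rate; faster clocks scale the blink rate proportionally.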

Then I started playing with the constant for the counter's frequency to see what speeds this 1802 chip would be good at with a 5V Vdd and 3.3V Vcc (the 1802 can use split supply voltages for its core and I/O. Not bad for 1976, eh?)

Overclocking! 4.8MHz Baby! Yeah, uh, Megahertz.
This one ran up to 3.6MHz without a problem. I didn't run it up until it stopped; the other day I think I had it running at about 4MHz when I was playing with the repeat loop. Above 3.6MHz it seemed to start running hot, so I stepped it back down again.

Above 3.6MHz it was noticeably warm after a while, and seemed to be getting warmer. Between 3.2 and 3.6MHz it was warm, but maintaining its temperature just fine. At 3.2MHz it was solid as a rock and the heat was barely detectable to a calibrated fingertip. If I can get it to run at this speed reliably with the Propeller providing video then popular Elf software like Tiny Basic, Forth, CHIP-8, and the programs written in them are going to rock on this system.

Edit: I've run it up to 4.807MHz now, and it didn't seem all that hot this time. It really shouldn't have any problem with heat at these voltages; it normally runs at 5MHz at higher voltages. And now it doesn't seem to have a problem. When I tried to take it over 4.8MHz it ran sometimes but not others, or hung occasionally. So this is the limit for stable operation at this voltage for this chip (an RCA CDP1802CE.)

Getting Data to the 1802

OK, so back to the problem of making the Propeller look like a RAM. Frankly, when I did my first look at the timing involved, I figured this would be completely trivial to implement. Push in data and address bus wires, connect up /MRD (memory read signal) from the 1802 to the Prop, crank a little Spin code, and I'd have the Prop acting like a ROM.

Put in /MWR and a little more code, BAM, the Prop is a RAM.

Hook up TPA to catch the high order address byte, tweak the code, and the Prop is ready to map more than 256 bytes. All I'd have to do is add the video code then figure out how much Prop RAM was available then expand to fit.

Well, it wasn't that easy. If it had been, I'd be playing with the video now.

I tried to leap forward at first. I plugged in the address and data bus wires, plugged in /MRD, and added some code to my clock driver program. It had a 256 byte array that it set up and initialized to $C4 in every position (NOP again, but this time it would be a soft NOP rather than a hard-wired one.) Then I wrote the program to wait for the Propeller I/O line I'd selected for /MRD to go low. Once that happened, it would get the address off the address bus, look up the appropriate element in the RAM array in memory, and put it out on the data bus until /MRD came back up again.
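The read-service loop described above boils down to the following logic, sketched here in Python rather than Spin (the pin-access functions are stand-ins for the real I/O operations, not actual Propeller calls):

```python
# The emulated-ROM read cycle described above: wait for /MRD to drop,
# look up the addressed byte, drive it onto the data bus, and hold it
# until /MRD returns high.

ram = [0xC4] * 256   # soft NOPs, like the hard-wired $C4 before

def serve_read(read_mrd, read_addr, drive_data, release_data):
    """Service one memory read from the 1802."""
    while read_mrd():          # /MRD is active-low: wait for it to drop
        pass
    addr = read_addr()         # grab the address off the address bus
    drive_data(ram[addr])      # look up the byte and drive the data bus
    while not read_mrd():      # hold the data until /MRD rises again
        pass
    release_data()             # tri-state the bus for the next cycle

# Quick simulation: /MRD goes low, we serve address 5, /MRD goes high.
mrd_levels = iter([1, 1, 0, 0, 0, 1])
served = []
serve_read(lambda: next(mrd_levels), lambda: 5,
           served.append, lambda: served.append("bus released"))
print(served)   # the $C4 byte, then the bus release
```

The logic is sound; as the scope trace shows, the trouble is entirely in how fast the loop can get around to driving the data.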

Easy-peasy, right?

Well, it didn't work.

Oscilloscope signal trace of one of the data lines and /MRD on the Propeller/1802 mashup.
Top: Data from Propeller pretending to be a memory. Bottom: Memory Read Request. Result: Memory Too Slow! (Click for full size.)

When I was doing my timing calcs, the 1802's leisurely timing looked like no challenge at all--it doesn't mind waiting just under a microsecond for its memory to come back when it's running at its normal speed (about 1.78MHz.) Hey, an 80MHz processor could be running a version of interpreted BASIC written in LISP and keep up with that, right?

I guess not. At least not the way I did it.

I clocked the 1802 at 1.25MHz for the test. This is an easy, slow speed that the Propeller can generate easily. I figured it'd work, I'd push up the frequency to 2.5MHz, check timings on the scope, and discover that I couldn't make the 1802 go too fast for the Prop.

Unfortunately, the trace above shows otherwise.

I Can Do Lots of Things Wrong, All at Once
I'm sure this is the result of something I'm doing improperly. I can think of several possibilities already:

I'm using WAITPNE and WAITPEQ to respond to the pin change. Perhaps, because they do some power-saving activity on the cog on which they're invoked, they just aren't meant to respond this quickly. Maybe if I just go to an active polling loop I'll be OK?

I'm using Spin, an interpreted language. Perhaps I need to insert a bit of Propeller assembly to tighten up the timing?

Perhaps there's some option or configuration thing I've not done?

Maybe this cog has to do something active with the timer/counter (I'm using the same cog as the one that's running the 1802's clock.) Perhaps if I move the RAM functions to another cog I'll be fine?

Those are just what comes off the top of my head. I'm far from being an expert on the Propeller, and I haven't picked up the books on it yet, just the free manual and datasheet downloads. But things like this create an opportunity to learn.
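On the interpreted-Spin guess in particular, a back-of-envelope budget suggests it's the likely culprit. The per-bytecode cost below is my rough estimate for interpreted Spin on an 80MHz Propeller, not a measured figure:

```python
# Back-of-envelope check on why interpreted Spin couldn't keep up.
# Figures marked "rough" are estimates, not measured values.

CPU_CLOCK = 1_250_000                 # 1802 clock used in the test
CYCLE_US = 8 / CPU_CLOCK * 1e6        # one 1802 machine cycle, in us
READ_WINDOW_US = CYCLE_US / 2         # rough window to get data valid

SPIN_BYTECODE_US = 3                  # rough per-bytecode cost at 80 MHz
OPS_NEEDED = 5                        # detect /MRD, read address, index
                                      # the array, drive bus, loop (rough)

print(f"read window ~{READ_WINDOW_US:.1f} us")               # ~3.2 us
print(f"Spin response ~{SPIN_BYTECODE_US * OPS_NEEDED} us")  # too slow
```

Even with generous assumptions, the interpreted response time overshoots the read window several times over, which is consistent with the scope trace; straight-line assembly, at 4 clocks (50ns) per instruction, would have room to spare.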

I also had a few other things I needed to figure out on the way--minor little things like the order you mention the output pins in when writing your code statements (I was getting $23 out instead of $C4 initially. Yeah, oops.)

That, and the whole thing is a rather fragile lash-up right now. I'm going to go to solder as soon as I get the RAM moving data to and from the 1802. But not before, because I hate rework, and if I don't test it on a breadboard first, there'll be rework. The breadboard also lets me easily change which lines I'm using for address and data. I moved the data lines to P16..23 so that I can watch the data on the QuickStart board's LEDs, like a little front panel. Before, I had the address lines there, but I've already got LEDs on the breadboard showing me the address line states. So a quick change of a few constants in software, and I get a data display with no re-soldering.

Slightly Off-Task Tasks
I also took some time out to search for a line cord for the new frequency counter I got at the Ham Radio Club the night before last, which came without one. (I found one, after searching high and low. It's an oddball, not an HP cord.) And I checked out the two portable NonLinear Systems oscilloscopes I bought, too (one works, one doesn't, as expected.) The frequency counter was very nice to have while I was figuring out how to use the Prop's counter/timer as a numerically controlled oscillator, so the time was well spent even if it did eat into my Propeller-as-RAM time.

Looking Forward to Round 3
I'll be back at this before the end of the week (I'd like to tomorrow, but it'll be a busy day so I may not be able to.) If you have any suggestions or salient experience, your comments or emails would be much appreciated!

Edit:

I've had a look at the Propeller docs. It looks like the timing of hub instructions is my problem. I may just insert WAIT states for the 1802 and see what that does.

Monday, August 13, 2012

Parallax Propeller + Cosmac Elf = ?

I've started working on one implementation of an idea I've had for a while...

There's this neat old 70's computer system called the COSMAC Elf. It's like a lot of the microprocessor trainer systems of the time, but it's got some unique abilities that make it a bit more interesting to build, and expand on, than some of the others.

Video output being one of the biggies.

Step One
Before I go further, here's a look at an early step in implementing my idea:
An 1802 CPU on a breadboard connected to a Propeller Quickstart board for power and clock.
A first step: Power and Clock from the QS Board to the 1802. (Click for bigger image.)

Here I've gotten to the point of getting clock and two levels of power from the Quickstart board to the 1802 CPU. First, I had a crystal oscillator circuit providing a clock to the 1802. Only 5V power came from the Vin on the Quickstart board.

Then, I split the voltages. The crystal oscillator, the inverter for the clock signal, and Vdd for the 1802 were split off with 5V power, and the rest of the circuit was put on the 3.3V power from the Quickstart board. At this point, I'd been running the 1802 at 1MHz, slow enough I could easily watch the LEDs on the address lines changing as it ran.

Then I moved up to a 2MHz oscillator. The 1802 was still good with that, with its Vdd at 5V and Vcc at 3.3V.

Then I tried to get fancy.

I took an output off the Prop and tried to use a repeat-wait structure to clock the 1802. It worked, up to a point. But I got to where I was unsure of my actual frequency, and the 1802 seemed to stop running at a lower speed than I expected. In fact, I was getting too clever and messing myself up. Looking back, I was probably somewhere above 4MHz when the 1802 refused to run any more!

After a while I realized that, and just decided to put the crystal oscillator back in.

Then I took another look, and decided that the 5MHz XI signal off the QS board could be used as a clock base. A 2.5MHz clock would be fine (actually, anything from 1.76MHz on up would be fine.) Most Elf computers run somewhere around 1.76 to 1.79MHz to accommodate the clock for the video IC they use. Getting at least that speed is pretty much a must for me to feel like this project is going where I want it to. But getting a faster clock would be even better, as we'll see.

First I dropped in a CMOS part--a 4013--to act as a divider for the 5MHz clock to drop it to 2.5MHz for the 1802. I forgot that at 5V the 4013 only really works up to 4MHz on its input. So that turned out to be a waste of time.

Then I dug out a small supply of 74AC74 ICs, which work fine well above 5MHz. One worked nicely, dividing the 5MHz down to 2.5MHz. In fact, just to be a little conservative, I used both flip-flops to divide the clock down to 1.25MHz, then ran that to the inverter I'd had the crystal oscillator going to.
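As a sanity check on the divider chain: each 74AC74 flip-flop halves its input, so the math is simple:

```python
# Each 74AC74 flip-flop stage divides its input frequency by two.
def divide_chain(source_hz, stages):
    """Frequency after `stages` divide-by-2 flip-flops."""
    return source_hz / (2 ** stages)

source = 5_000_000  # 5MHz XI reference from the Quickstart board
print(divide_chain(source, 1))  # one stage:  2.5MHz for the 1802
print(divide_chain(source, 2))  # two stages: 1.25MHz, the conservative first try
```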

That worked, so I tried the 2.5MHz output. At first the 1802 wouldn't run. I pushed the Reset switch, and noticed the clock took off when I bumped the ground wire. Once the ground wire was back in its place securely, the 1802 ran just fine at 2.5MHz.

Then I bypassed the 4049 inverter; the signal from the 74AC74 is plenty strong enough to drive the 1802 by itself.

That was step one. Time for a break before step two.

The Plan
So why hook up an old CPU to a fast modern CPU like the Propeller?

Because of a problem with getting chips for the old Elf computer.

People still like building the old Elf computer. It's a complete computer system that can be built in an afternoon if you've got all the parts and tools at hand (I built my first in about four hours.) It's a computer that pretty well exposes all of its parts to examination, so it's easy to learn how it works, and to understand all the bits of the system.

The video IC, called the CDP1861 Pixie chip, is one of the simplest video ICs ever made. It's basically some timing and control signals wrapped around a shift register that works with the DMA mode of the 1802 to produce a really nifty one chip video interface.

It's not exactly workstation graphics, being monochrome with resolutions ranging from 64 x 32 to 64 x 128. But it does the job. People program the system using this quality of video. And you thought the Vic-20 was low-res!
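For a sense of scale, the Pixie modes are one bit per pixel, so the frame buffer sizes are tiny by modern standards:

```python
# Frame buffer size for the Pixie's 1-bit-per-pixel video modes.
def framebuffer_bytes(width, height, bits_per_pixel=1):
    return width * height * bits_per_pixel // 8

for h in (32, 64, 128):
    print(f"64 x {h}: {framebuffer_bytes(64, h)} bytes")
```

The 64 x 32 mode works out to exactly 256 bytes, which is why it maps so neatly onto a minimal Elf's base memory.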

Well, the problem is that in the past few years Pixie chips have pretty well become Unobtanium (a term that goes way back before the movie Avatar, by the way.) In other words, you can't get 'em. There have been a couple of less-than-optimal replacements (less than optimal from the perspective of new Elf builders, who have to make them up themselves rather than just buy one pre-made.)

I'm trying to come up with something slightly less suboptimal. And solve a problem that the Pixie chip has.


The Third Cycle...of DOOM

The Pixie's problem is that its timing only deals well with 1802 programs that use instructions that take two instruction cycles or less to complete. There are two instructions, Long Branch and Long Skip, that take 3 cycles. They create jitter in the video by throwing its timing off.

The Elf's video is basically just a straight bitmap of a chunk of memory put on the screen (the lowest resolution, 64 x 32, is a map of the Elf's base 256 bytes of memory straight to video.) So, if some other circuit could read a relatively new, fast RAM during the time when the 1802 isn't reading it, that circuit could pull the data and run it to video, leaving the Elf none the wiser.

That way all the Elf has to do is manipulate the data in the section of memory on the screen. And a 3-cycle instruction won't cause any problems. And the 1802 gets about 40% of its time back, during which it would otherwise have been doing DMA of that memory data to the Pixie chip's shift register.

So:
Video that doesn't need the unobtainable Pixie,
No 3-cycle instruction timing issues,
No loss of 40% processing time to video DMA, and
Able to run at higher clock speeds.

Sounds like a winner, right?

Implementation Details

Next was how to actually implement it. I've looked at several ways, with various advantages and disadvantages.

Using faster RAMs was the first building block I looked at. For example, a static RAM pulled from a 486 motherboard's cache would be more than fast enough. Both 20ns and 15ns parts are readily available. Plenty fast to grab a byte once every 1802 instruction cycle for the external video system.

Then, came my initial thought, maybe use an Atmel AVR microcontroller to do the grabbing, put the data into its internal RAM, use that as a frame buffer for some bit-bashed monochrome video. No big deal.

No big deal if you've already got the ability to program AVRs, or are prepared to supply them preprogrammed. Still, not a bad solution. Just not likely to be as popular as I'd like, because of the hurdle of programming the chip. The idea wasn't limited to an AVR; any of a number of uC families could work. But they all had the same problem.

Another idea was to build a circuit from random logic. Not as appealing, with my schedule, but if I could pioneer this and put something together, anyone could order the parts and start wiring. It would probably add anywhere from a third more to double the work of assembling an Elf computer. Again, not perfect, but possible.

Then I thought about taking the AVR idea and swapping in a Propeller board. The advantage here is that, rather than getting a bare microcontroller and having to acquire the infrastructure to program it for the one job that user may ever use it for, the board itself is all the infrastructure needed. (Yes, Arduino occurred to me, too.)

A download on a computer, a USB cable of the correct type (I'm using a cell phone charging cable), and you're in business. Even if the user never does anything with a Propeller again (what an unfortunate thought!) then they wouldn't be out anything but a bit of their time to get the "part" they need programmed for the job.

And it's less time than hand-wiring random logic on a perf board, no matter how you look at it.


It Just Keeps Getting Better

So, I started with the idea that the speedy (80MHz) Propeller would have no problem sneaking in and reading bytes out of the Elf's RAM during those long, lazy ~200ns slack periods. Then it could put the data into an internal frame buffer.

And, what hey!, the Propeller has built-in video. I could make it so that the final Propeller program puts out composite baseband, composite broadcast, and VGA all at once. What a deal! The user doesn't even have to pick between different programs based on what sort of video they want right now.

Then came the next idea. Replace a chunk of the Elf's RAM entirely. Let the Propeller be a chunk of RAM.

It can't replace all the possible RAM of an Elf. The present Propeller only has 32K of RAM on board, and it needs some of that for whatever application it's running.

But, it can replace a chunk of the Elf's RAM. Enough for Pixie-quality video. And with some lines used for control, the video resolution and frame buffer location within that memory can be changed. I don't know yet, but it seems like mapping somewhere from 2K to 4K wouldn't be that difficult.


Pixie Compatibility

At first, I'm going to concentrate on a limited memory map, and on replicating the basic Pixie mode of 64 across by 32 high. In spite of the fact that there won't be any actual DMA transfer needed, it should be able to display video from the basic Pixie video programs like the iconic Starship Enterprise video from 1977 without modification. They'll just run faster as a result of no DMA overhead. I think.

Then looking at the exact details of how an expanded Elf uses the other modes (if it does, I've never done so myself) will let me look at expanding the Propeller's memory map area and responding to the Elf's control to do that.

So, if I can do what the Pixie does without requiring DMA, I'm already getting a system that's about 40% faster even if I don't move up the clock speed from ~1.79MHz. (The 1802 was the original overclocker's chip, way back in the 70's, but that's another story.) If I can increase the clock speed even more, with no effect on the video (since a hard-clocked Pixie chip isn't there any more), then I've got a system that'll run such things as Chip-8 and Tiny Basic that much faster than an original Elf with a Pixie.

If the 2.5MHz setup turns out to work (I see no obstacles at present, but that just means I haven't run into them yet), then I'm getting a system that should run about 2.3x faster than the stock Elf. It'll still be no speed demon (that's not the point), but it'll be nicer to use.
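The arithmetic behind that 2.3x figure, assuming the stock Elf really does lose about 40% of its time to video DMA:

```python
# Rough arithmetic behind the "about 2.3x" estimate. The 40% DMA loss
# is my working assumption for a stock Elf with a Pixie chip running video.
stock_clock = 1.79e6    # typical Elf clock, Hz
new_clock = 2.5e6       # proposed clock, Hz
dma_loss = 0.40         # fraction of stock CPU time spent feeding the Pixie

stock_effective = stock_clock * (1 - dma_loss)  # useful cycles on a stock Elf
speedup = new_clock / stock_effective           # new design loses nothing to DMA
print(f"~{speedup:.1f}x")
```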

Next Steps, Baby Steps

I'm going to write a program for the Prop to make it pretend to be a RAM for the 1802. It will be a simple 256-byte memory, which avoids dealing with the 1802's multiplexed address lines (only the low address byte is needed). It'll (hopefully) receive and store data bytes from the 1802, and deliver data from its store on request.
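To make the plan concrete, here's a toy model of the intended behavior in Python. The real thing will be a tight loop on a Propeller cog watching the bus pins; the class and method names here are my own illustration, not 1802 signal names.

```python
# Toy model of the planned Propeller-as-RAM: a 256-byte store that
# services read and write requests from the 1802. The real version
# will be Spin/PASM watching the bus; this only sketches the logic.
class PropellerRam:
    def __init__(self):
        # 256 bytes: a single address byte, so no multiplexing to untangle.
        self.store = bytearray(256)

    def memory_read(self, addr):
        """1802 asserts a read: drive the stored byte onto the data bus."""
        return self.store[addr & 0xFF]

    def memory_write(self, addr, data):
        """1802 asserts a write: latch the byte off the data bus."""
        self.store[addr & 0xFF] = data & 0xFF

ram = PropellerRam()
ram.memory_write(0x00, 0x7B)        # 0x7B is the 1802's SEQ opcode (set Q)
print(hex(ram.memory_read(0x00)))
```

The first-pass test in the plan (a program that blinks the Q output) would live entirely in a store like this one.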

The first pass is going to be really, really simple. I'm going to set up an array, put in a short program to blink an LED on the 1802's 'Q' output, and set it up to respond to the 1802's memory read requests. No writes required. If that works, then I'll add write capability and put in a program to test that (again using Q as an output.)

If that works, I'll proceed to give the 1802 some more sophisticated output and input capability and see where it goes from there. But best not to plan in too detailed a fashion too far ahead until the immediate problems are solved.

Thursday, July 19, 2012

8085 Monitor Code and Other Distractions

I've been working on an 8085 microprocessor project for about 3 years now. It started with a simple circuit on a breadboard, moved to my HP Logic Lab, where it became a real computer, then got built into a permanent version on a prototyping PC board.

I've been documenting the thing since it moved to the Logic Lab on my web site, posting how-to articles on building both versions (the hardware is identical, only the construction method is different), as well as software to control the hardware.

Where I've come up short is putting together a sort of unified OS for the system online that allows software to be developed right on the system itself, as well as any high level languages. I've been writing lots of software for my own use with the system, mostly hand-coded using my assembly coding forms and my 8085 Pocket Reference Card. But I keep either getting distracted from or otherwise dodging the job of integrating all the software bits I've already got into a simple "monitor" program (sort of a mini-OS for machine language) for the system.

Part of it is the usual life distractions. I've been sitting next to a wild fire this past week, for example. And then I've got lots of other electronic toys I like to spend some time with. Each one has its own appeal.

Before the fire, and a bit during it (when I was taking a break from cutting ever more brush around my property), I got back to work on putting it together. The biggest piece is the part that reads the keyboard, determines the current system state, and dispatches keystrokes and execution to the right place. All the hardware interfaces are already in place, and most of the basic system routines (timing, delays, string handling) are done, so the "glue" is pretty much all that's needed. I got it about halfway done before the fire started; right now I'm writing the code that actually takes action for each mode, or simply hands the necessary info over to user apps running on the system.

So, if I can get time away from deck repairs on my house this weekend (now that it looks unlikely that the house will burn down or that I'll be evacuated), I'll be trying to wrap up and test that code.

Small Thing, Big Obstacle

The other thing I'm looking forward to is replacing some of the switches I put on the MAG-85. I put in switches for various interrupts about two years ago, and the switches themselves turned out to bounce and make so much noise that I've given up on them. No amount of debouncing, hardware or software, within reason has made them reliable. I'd hate to have someone else construct a MAG-85 and have to deal with this. It's been a thorn in my side ever since I added them, and took a lot of the fun out of the permanent hardware project for me. (On the breadboard version, I used some old keyswitches out of a knackered Mac Plus keyboard; they worked great with only the most minimal hardware debounce. But I figured I could hardly specify 25 year old keyswitches in a project that others might want to build.)

I'm expecting a shipment of a bunch of different switches tomorrow that I can test and select from to replace the awful switches I have now. So I can put that behind me (and probably re-simplify the circuit to take out a bunch of the extra parts I put in to try to deal with the noise on these switches.)
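For reference, the counter-based software debounce I've been attempting looks something like this (a sketch only; the threshold is illustrative, not tuned for any particular switch):

```python
# Counter-style software debounce: accept a level change only after it
# has held steady for `threshold` consecutive samples at a fixed rate.
class Debouncer:
    def __init__(self, threshold=8):
        self.threshold = threshold  # consecutive matching samples required
        self.count = 0              # samples the pending change has held
        self.stable = 0             # current debounced output level

    def sample(self, raw):
        """Call at a fixed sample rate; returns the debounced level."""
        if raw == self.stable:
            self.count = 0          # no change pending; reset the counter
        else:
            self.count += 1
            if self.count >= self.threshold:
                self.stable = raw   # held long enough: accept the change
                self.count = 0
        return self.stable

db = Debouncer()
noisy = [1, 0, 1, 0] + [1] * 10     # bounce, then the contact settles high
print([db.sample(s) for s in noisy][-1])
```

With switches as noisy as mine, even a scheme like this falls over unless the threshold gets unreasonably long, which is why new switches are the real fix.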

Frankly, the old switches are something I'd about hesitate to use in a doorbell circuit, never mind real electronics.

Friday, July 13, 2012

Propeller: Things That Make You Go Hmmmmm...

Propeller quickstart board and a telephone rotary dial...do I feel a project coming on?

P8X32A Propeller Quickstart Board...Telephone Rotary Dial....Hmmmmmmmm.

Tuesday, July 10, 2012

Gadget Gangster QuickVGA+ for Parallax Propeller

Gadget Gangster QuickVGA+ board for the Parallax Propeller Quickstart Board
Propeller Quickstart Add-Ons Galore

I mentioned that I picked up a Parallax Propeller Quickstart Board a while ago, and finally got around to playing with it.

Well, I've played with it a lot more now, in part thanks to a cool add-on board from an outfit called Gadget Gangster. I bought a Quickstart VGA+ board from them. It adds a nice VGA port (duh) as well as a PS/2 keyboard port (that's PS/2 as in the old IBM "Personal System/2", not Playstation 2), a stereo audio output, and a Wii Nunchuck port (or the classic controller can go here.) All on one little business card sized board, that fits right on top of the Parallax Quickstart board.

It's a nice combination of features. It's also got places for adding an SD card connector (which I've done), an IR port, and a composite video port.

There's a Pocket Mini Computer project built around this board, that's sort of a C-64 retro-clone kind of system. I look forward to trying that out some time when I get tired of playing with the Wii nunchuck as a raw sensor in software.

One note about that...

The nunchuck software that Gadget Gangster recommended didn't work right out of the box. I've posted an updated version of the nunchuck demo software at the Parallax Propeller Object Exchange, the place where Propeller users post their software for others to modify and use. I modified the demo to use the correct I/O lines for the QuickVGA+ board, and I added a bit of code to make it more clear what's going on with the accelerometers in the nunchuck.

Next, I'll be building up my Gadget Gangster QuickProto board, which came in a bundle deal with a QuickStart board (so, yes, I now have two Quickstart boards. And I ordered one for my daughter, too.)

Pennies from Heaven
That's not all the Propellers running around the house, as it happens. While we were at the Parallax Robot Expo, my daughter and I picked up some stuff off the Free table. Among those items were a pair of slightly used Propeller Proto USB boards and one one-off PCB Propeller board that appears to be mainly for servos. My daughter has one of the Proto boards and I have the other, as well as the one-off PCB.

I've already added PS/2 mouse and keyboard ports to my Prop Proto board, as well as a composite audio/video adapter. I have plans for this board...

I also found an early rev of the Parallax QuickStart Proto board in my bag of goodies, which I'm planning to use to put a sort of Elf II I/O interface on a Quickstart board. A hex keyboard, a pair of hex displays, some control switches, and hey-ho it's 1977 all over again. :)

Bottom line on the Propeller: It's heaps of fun to play with. Quick and easy. I keep thinking I ought to stop and read the documentation, but it hasn't been necessary, yet.

Monday, June 4, 2012

Parallax Propeller Quickstart Board

When I was at the Parallax Robotics Expo back in April, I picked up a P8X32A Quickstart Board. I've had my eye on the Propeller chip for a long time; I first heard of it before release. But to date, time spent on other things kept me from getting started with it.

Parallax P8X32A Quickstart Board, shown inside an Altoids Tin for scale.
The P8X32A Quickstart Board, in the obligatory Altoids tin for scale.
At the robotics expo Parallax was cutting some great deals on their products, however. And I figured that if I had something on hand it might make it a bit easier to take a first look at the Propeller. With the discounts they were giving at the show, I seriously considered the Propeller Starter Kit, which includes the Propeller Demo Board. But, since I had no idea how long it would be before I actually got a chance to use it, I decided that a smaller investment would be wiser. So I got a Quickstart board.

The Time Window Opens

Finally, last Friday afternoon, I had a chunk of time open up as a result of being too ill to make a prior commitment. (I can't help but think of how many important things in my life have happened because of time made available through being too sick to do what I was supposed to do, or because I was stranded somewhere without transportation.) The tiny box for the Quickstart board had been living next to one of my computer keyboards for over a month, so I didn't even have to go looking for it. There was a USB cable of the correct type sitting in the drawer next to me. So I started in.

To keep it simple I decided to start out with the Parallax programming environment, Propeller Tool on my Windows 7 machine. I expect to try out Brad's Spin Tool, the other major development tool for Propeller, on my Mac and Linux machines a bit later.

I was pleased to see that it installed everything: the driver for the USB interface chip on the Quickstart board, the IDE, and the serial terminal tool for the Propeller. It says it does all that before you download and install, but I was a bit unsure whether the Terminal program would actually be in there until I had actually installed the package.

Once the software was installed, I started it up then plugged in the Quickstart board. I got a ding-dong from Windows telling me it saw the hardware. I was able to find a menu item to report what chip version the software saw, and it saw my chip. I didn't get the message I expected from the documentation at first, a dialog pop-up. That appeared later, the first time I tried to send a program to the board.

About that...

Parallax has a tutorial that runs you through the initial steps of programming the Quickstart board. Aside from the fact that to get to each of the several pages you follow a link from the product page (the pages aren't linked together--that is, when you get to the bottom of one page there's no link to the next one) there's another oversight on these pages as well.

The page gives a short, simple program, describes what it does, then goes on to a slightly more sophisticated program. It tells you about how the editor behaves. But it doesn't tell you how to actually get the code on the chip.

Now, I'm not exactly a first timer here. I looked at the menu and saw "Run" at the top of the screen. But when I chose that I got a list of items, none of which were clearly "compile and run". So I started selecting things at random. I also didn't know whether to expect a simulation environment to start, or how to select between a simulator and the real hardware. I scanned the tutorial web page up to the third program in it, and found no specific instructions.

This is one of those things of which Sherlock Holmes remarked, "the answer is perfectly obvious, once you know it." There is no simulation environment in the free tool; that costs extra (I wish Parallax could follow Atmel's example in providing it with the free tool--the first job I applied an AVR to happened because I was able to use the simulation tool to verify the software timings I needed). Either Run=>Compile Current=>Load EEPROM or Run=>Compile Current=>Load RAM would have done enough to get me what I wanted--the sample code running on the target hardware.

But that wasn't clear. I was doing a quick try it and see, so I hadn't spent time with the chip's datasheet yet to learn about how it loads and runs software, what facilities it has for storage, what the implications are of loading EEPROM vs. loading RAM (I didn't even know if they were both program storage, for all I knew one was program storage and the other was for loading an initial state for testing.)

So I was a bit stymied until I just decided to try things at random and see what happened.

At the bottom of the tutorial page, below the step-by-step examples that omit those compile-and-run instructions, is a short table describing the shortcut keys and what they do. There they describe F10 and F11, for example, and the implications of each choice (RAM or EEPROM).

I managed to get the short initial program (which lights an LED) loaded and running. The LED lit up. There was much rejoicing.

Demo, Demo, Who's Got the Demo?

The next thing I started to wonder about was the demo that shows off the touchpads by lighting up the LEDs when the pads are touched. It didn't appear to be pre-loaded, the way demos often are on other processors. Nor did there appear to be anything obvious among the Propeller code that came with the IDE (another thing I found by spelunking, rather than through the documentation.) There were no projects with names like "Quickstart Board Demo" or "Touchpad Demo" or whatever. Nearly all the demos appeared to be for hardware I didn't have.

At first I went back to the Quickstart tutorials on the web. I did searches for the demo on the Parallax website and through Google, but the closest I came up with was modified versions of the demo listed in a forum thread. I did find the demo later, but only by going back and looking for it time after time; I finally located it today, my fourth day of trying, here, on the Parallax Semiconductor Quickstart product page--not on the Propeller downloads page or the Parallax store product page, where there are lots of other downloads but not this one. The location is so subtle that even after I found it earlier today, it still took me five minutes of backtracking to find the right page again. It's on the linked page, at the end of the list of downloads at the bottom. Direct Link to P8X32A Quickstart LED-Touchpad Demo Software Download

Well, back to last Friday. The demo was interesting to me, but while I was looking for it I came across information about the video capabilities of the Propeller. Frankly, while I'd heard the Propeller did video I just figured it was something along the lines of bit-bashed video like what I've done with several other microcontrollers. I've even generated video from a little Atmel AVR ATTiny12 with nothing more than a software loop and a resistor divider--no fancy shift registers or anything.

Needless to say, it was pretty simple video. No colorburst, no color. But it got some data out of the chip on a two-pin interface nicely (three levels are needed for even black and white video to get the sync signals for timing in addition to the black and white pixel data.)

The Propeller goes way, way beyond this. It's got both NTSC and VGA signal generation hardware on board. And it's not just single bit video, either. It can generate full color at a number of resolutions. This I had to try. But my little Quickstart board doesn't have a video output on it. I looked at the Propeller Demo Board and saw that it had a video interface. It was just a 3-resistor DAC and an RCA jack, so I decided to make one.
Three resistors on an IDC header to an RCA jack, a fourth square pin on a wire provides a ground. Simple quick and dirty video interface for the P8X32A Quickstart board.
It doesn't get much quicker and dirtier. It took longer to pull the parts than to assemble.
Five minutes later, I had my own video interface. (Note: the accepted way to do this if you're not buying your Quickstart board at a Parallax event is to go to Gadget Gangster and buy your Quickstart board with a free QuickProto board, which includes the video interface along with lots of other cool bits that make the Quickstart board so much more ready-to-go.)
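For the curious, the reason three resistors are enough is that they form a crude binary-weighted DAC into the 75-ohm video load. The resistor values below are illustrative assumptions of mine, not the Demo Board's actual schematic; the node-voltage math (Millman's theorem) is the point:

```python
# Why three resistors make multi-level video: each Propeller pin drives
# Vdd or 0V through its resistor into the 75-ohm load, and superposition
# (Millman's theorem) sets the output level. Resistor values here are
# illustrative assumptions, not the Demo Board's real schematic.
def dac_level(bits, r_values, vdd=3.3, r_load=75.0):
    """Output voltage for a tuple of pin states (MSB first)."""
    g_total = 1.0 / r_load + sum(1.0 / r for r in r_values)
    g_high = sum(1.0 / r for bit, r in zip(bits, r_values) if bit)
    return vdd * g_high / g_total

resistors = (270.0, 560.0, 1100.0)  # assumed MSB..LSB values, roughly binary-weighted
for code in range(8):
    bits = ((code >> 2) & 1, (code >> 1) & 1, code & 1)
    print(code, round(dac_level(bits, resistors), 3))
```

Eight roughly evenly spaced levels from three pins: enough for sync, black, and several shades of gray.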

At first, the video was pretty wonky. Then I realized I had the resistor that's supposed to go to P13 going to P15. Once I fixed that, it was better, but still not good. The graphics demo ran OK, but one of the text demos wouldn't even sync properly, the other had poor contrast. The thought crossed my mind that either the timing in the Propeller wasn't much better than my old Atari Pong or that the provided 5MHz crystal meant that the timing wasn't right on. But I've learned the hard way, time and time again, to suspect user error first.

I took another look at my too-quick and too-dirty video interface and found that the resistors that were supposed to go between P12 and P13 were swapped. A moment later I had pulled the pins from the plastic IDC header and moved them to the correct positions. This is why doing electronics when you're under the weather is not always a good idea. Fortunately this wasn't a mistake that'd let the smoke out.

Once that was done, the video looked great on all the demos. I then had to make a battery adapter so that I could try it out on the big screen TV in my living room. A quad-AA battery pack came to hand quickly enough, one of the ones with the 9V snap connector on the end. Then I soldered a 9V snap connector pigtail to another IDC header and spliced in an IDC socket pigtail. That way I'd still have a place to put my ground pin from the Q&D video interface without soldering it to the battery connector. As a final refinement, I put a slider switch in line on the positive battery wire to act as a power switch.

Once that and the video adapter were plugged into the Quickstart board, I took it out to the living room and hooked up to a free composite video input. It looked great.

I spent the rest of my time mucking around with the code of the various NTSC video demos. Spin is a really easy language to learn; if you've got any programming experience it'll come easily. It's even simple enough I could easily see teaching it as a first language to new programmers (with a Propeller board on hand, of course, to get the full hardware + software experience!)

Cleaning Up Shop
As mentioned, I have now been playing with my new board for four days. In spite of a couple of minor snags (figuring out what to choose to compile and run my programs, finding the demo for the Quickstart board itself) it's been a very rewarding use of my time. I can see now why people who've been raving about this controller for five or six years are excited about it.

After my first night playing with the board, I learned while reading the Parallax forums that the Propeller could also do RF-modulated video. That's what I spent mid-day Saturday playing with. I put rabbit ears on my big screen (talk about incongruity!) and a pair of jumpers on the RCA jack and managed to receive video from the Propeller from about 25 feet away. What worked even better was an LCD handheld television left over from the analog TV transmission days. I was able to get as far as 10 feet from my back door, a total distance of about 50 feet, and still get good reception. I don't think it's making it as far as the neighbor's house, though (fortunately.)

Yesterday I buckled down to some more structured learning of the Spin language, reading the Propeller Manual and Datasheet, and so on. Today I'm continuing that, working my way through tutorials and such. I also took a look in the goodie bag I filled at the Free tables at the Robotics Expo--there are a couple of Propeller boards in there that I can't wait to find a use for now.

Saturday, April 14, 2012

Parallax Robot Expo

My daughter and I went to the Parallax Robot Expo today and had a great time. It's neat having an event like this only about 45 minutes from home. My daughter has a couple of BOE-Bot robots and a bunch of additional bits she enjoys working with. I'm into anything microprocessor or microcontroller, and I think Parallax is a neat company that makes great products.

We were a bit worried about the effects of the weather. We left home with snow falling and drove through a rainstorm on the way. When we got there it was only raining lightly. Even if it had been raining, everything was either under tents or inside. It wouldn't have been necessary to do more than duck across a few feet of open sky in any case.

As it happened, things only got better as the day went on.

We had a really good time checking out the various demos. They had one tent with activities, laptops set up with boards hooked up to them, where people could get their feet wet in programming the boards to blink lights, run servos, sound off on piezos, etc. We didn't check that out too closely since we can do that at home.

Alongside it was a "Learn to Solder" area with several tables. You get to build an S-2 Robot LED badge. Not an actual S-2 robot, mind you, but a little hang around your neck badge shaped like the Scribbler 2. It's got a 7 color LED on it, a switch, a battery, and some embedded logic. It's a neat little thing and it's entirely free.

I made one though I'm no newcomer to soldering. I had a very helpful young man making sure I prepped the parts properly and oriented them correctly. He was also able to answer my question about whether there was logic embedded in the printed circuit board.

The factory tour was also very nice. They've got some automation they're justifiably proud of. I'm surprised that it appears they actually use a CNC to make production parts. I'd have expected higher volume processes for them.

Also inside was a display and presentation area. They had various talks through the day by very approachable people with experience directly relevant to the attendees. It was interesting hearing them talk about their technical work and their work to develop their businesses. We weren't able to hear all the talks we would have liked.

Back outside there was a tent for vendors--the show prices on Parallax products were amazingly good and there were some other great vendors there, too. There was also a show area for other companies; Panavise was among those showing there (IMO, yes, their products are worth every dime!)

Maxi Swag
One of the coolest things there, a real unexpected treat, was the Free Stuff tables. There were all sorts of free parts and boards and what I assume was discontinued product. I got a book on programming the SX microcontroller, a Javelin Stamp, some sort of Propeller board, and some other fun electronic "Junque Box" goodies, as did my daughter (I think I'll be showing her how to wire up a CCFL this weekend; there were packs of them on the tables.)

We both had a great time. My daughter got really excited and couldn't wait to get back home to her electronics. We picked up some great project books for her, cheap.

If you get a chance to go on Saturday, I recommend it highly.

Monday, January 16, 2012

8085 Resurgent: Back to the MAG-85

I took a look at my own website the other day and realized it's been far too long since I have updated certain items there. Most noticeable to me was my MAG-85 project, an 8085-based micro trainer. There are a bunch of things I thought I'd posted over a year ago, but the information isn't there.

Obviously I never did it. Sorry about that.

The front page image for the 8085 project was a really ugly thing I took at a very interim stage while I was doing regular updates on the project. If I wanted to scare someone off the project, I think that picture is what I'd use. What a rat's nest!

8085 microprocessor computer project SDK-85
A Very Ugly Looking 8085 Computer

Shortly after that picture was taken I built a real front panel and enclosure. It's been happily living in that enclosure for well over a year, but I never posted the info on it. In fact, I've just started taking it apart in preparation for making some improvements. Since I like to take pictures of my work for documentation purposes (like getting the right connectors back in the right places), I took some photos of the partially-disassembled unit as it is before I make the updates. Here's one:

MAG-85 8085 computer project in (and partially out of) its enclosure.
A bit ugly with the top and bottom panels off, connectors and wires trailing out, but not so bad as in the first photo.

Here you can see that it's not so ugly as before. The LED to the left of the LCD display is controlled by the 8085's SOD output. The eight LEDs below the display are controlled by the 8085's OUT 01 command and held in their state by a register. The eight switches below that are on the 8085's IN 01 port.

The program that's currently running reads the position of the switches and outputs a matching byte to the LED bank. It also reads the keyboard (on port IN 00) and sends each key's value plus 0x30, as an ASCII character, to the screen. The LCD display is on the OUT 00 port.
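For the curious, here's the gist of that demo loop, sketched in Python rather than 8085 assembly (the `read_port` and `write_port` helpers are hypothetical stand-ins for the IN and OUT instructions, not anything in the real code):

```python
# Python sketch of the demo loop, not the actual 8085 program.
# I/O map as described above: IN 00 = keyboard, IN 01 = switches,
#                             OUT 00 = LCD, OUT 01 = LED bank.

def demo_step(read_port, write_port):
    # Echo the switch positions straight to the LED bank.
    write_port(0x01, read_port(0x01))
    # If a key is waiting, translate it to a character for the LCD.
    key = read_port(0x00)
    if key is not None:
        write_port(0x00, key + 0x30)  # e.g. key value 5 becomes ASCII '5'
```

On the real hardware each of those calls is a single IN or OUT instruction; the loop just runs forever.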

The four push buttons above the keyboard are, from left (red) to right:
RESET
TRAP
RST7.5
RST5.5

In the crude OS/monitor I have running on the MAG-85 now, TRAP acts as an "Escape" key that returns control to the OS. This allows miscreant programs to be stopped and memory examined any time, since TRAP isn't maskable.

RST7.5 is used as the user vector to the start of the application program in memory. In essence, this is the "GO" or "Execute" button.

RST5.5 is also a user vector that should point to a subroutine that does something and returns. Either the application program should initialize it, or it has to be initialized by hand in the monitor.

The keyboard itself uses RST6.5 to read a key value into a buffer, where an OS routine can pick it up/translate it, etc. The keycap legends allow for several uses of the keys. The typical use of the yellow keys is hexadecimal number entry. But they can also be used as arrow keys and fire button (9) for games.

The top row has the IN or Enter key in red, the backspace (BK) in blue, and the mode (M) and edit/view (e/v) keys in gray. The edit/view key toggles a flag in the OS that switches between a memory-protecting view mode and an editing mode within each display mode. The mode key modifies the mode variable in the system to switch between Memory, Register, and I/O port viewing or editing. It's possible to change from editing to viewing and back again while in any of the modes.
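A purely illustrative sketch (in Python, nothing like the real OS code) of how that mode variable and the edit/view flag relate--they're independent, which is why you can flip between editing and viewing without leaving the current mode:

```python
# Illustrative model of the monitor's mode handling, not the real OS code.
MODES = ["Memory", "Register", "I/O"]

class MonitorState:
    def __init__(self):
        self.mode = 0          # index into MODES
        self.editing = False   # False = view (memory-protecting), True = edit

    def press_mode(self):
        # The M key cycles Memory -> Register -> I/O -> Memory...
        self.mode = (self.mode + 1) % len(MODES)

    def press_edit_view(self):
        # The e/v key toggles independently of the current mode.
        self.editing = not self.editing
```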

Current Rework
My current plans for reworking the MAG-85 have to do with replacing the buttons used for RESET, TRAP and the RSTs, plus improving the ergonomics of the unit a bit by replacing the top and bottom panels, which were hand-made on hardboard, with some nicer CNC'd panels that reposition some of the controls (I switched off the unit more than once when I meant to switch on or off the backlight for the LCD.)

The four pushbuttons above the keyboard are some really awful buttons. They ring like bells, causing all sorts of debounce problems. I have both hardware and software debouncing on them right now, and it *mostly* works. I think part of why I stopped posting before was that I wanted to kill this problem before I posted, so as to avoid causing anyone else the headaches I've had with these switches. And, as you can see, those switches are still in there.
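The usual software cure for ringing switches is an integrating debounce: a raw reading has to agree for several consecutive samples before the reported state is allowed to change. A rough sketch of the idea in Python (my actual routine is 8085 assembly and differs in detail):

```python
class Debouncer:
    """Integrating debouncer: the raw sample must disagree with the
    current state for `threshold` consecutive polls before the
    debounced output changes."""
    def __init__(self, threshold=4, initial=False):
        self.threshold = threshold
        self.state = initial
        self.count = 0

    def sample(self, raw):
        if raw == self.state:
            self.count = 0          # agreement resets the counter
        else:
            self.count += 1
            if self.count >= self.threshold:
                self.state = raw    # stable long enough: accept it
                self.count = 0
        return self.state
```

A bounce that rings for fewer polls than the threshold never makes it through, at the cost of a little latency on legitimate presses.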

A sideline on the current work is also preparing for a couple of improvements that have been planned since the beginning, but haven't happened yet. One is to put a nice little door in so that the NOVRAM/EEPROM/EPROM memory can be swapped out without opening the whole unit. Right now I unscrew the end pieces then pop out the face panel to get at the memory socket. Which is not clean, quick, or easy. I have to get the cables all to go back to their places each time I put it back together. A couple of times I've pulled out one of the input cables by accident, then wondered why the keyboard or switches aren't talking when I get it back together.

The other is preparing to mount a battery pack inside, so that I can go cordless with this thing. A lot of what I've been doing in design tweaks is reducing the power used by the system. Finding a good brightness for the backlight, putting a switch on the backlight so that I can turn it off when I'm in good light, reducing the brightness of the power LED and the I/O LEDs, that sort of thing. As well as looking at my nascent OS to see if there are things I can do there that will reduce power while staying out of the way of the user's programs.

I'm also planning on adding another memory socket for an EPROM. It won't have a ZIF socket in it like the one that's there now, but I'm getting to the point of wanting a more permanent memory for the OS, with the NOVRAM left for user space programs. This may involve some rejiggering of the memory map, since for mechanical reasons I'd like to have the removable memory in the center of the PCB side to side.

I'm also looking at adding an expansion port or two on the new end plates, which raises the question of whether to use a standard connector with a more or less standard wiring for the port (like a PC's bidirectional parallel port) or whether to roll my own for the sake of less constraint on how the lines are used. Basically just bringing the lines from the I/O buffer registers straight out of the box along with power and ground. This is my preference for a number of reasons, but there's an appeal to letting the MAG-85 drive standard I/O devices, too.

The addition of a serial port, either at TTL levels or as a standard RS-232C port, is another possibility I keep playing with. I've kept SID available for this possibility, though I was tempted to use it as either a user digital I/O or a memory bankswitch I/O or something of the sort. Being able to connect the MAG-85 straight to a terminal would be really nice, so I'm leaning toward an RS-232C port.

But if I do that, I'll need a separate set of I/O routines for that port that allow it to be the primary user I/O for the OS. So if I do that, it'll probably be deferred until some other things happen first.

Like finishing the OS well enough to post it.

Finishing the MAG-85 Operating System
I haven't posted the OS yet because it's still in a fairly yucky developmental state. That and my test hardware still has the crummy switches that do nasty things to me at times, and there's lots of code in the OS just to try to deal with them that can probably come out once I've got decent switches in.

So my present software objective is to clean up the OS and put a bow on it so that I can "ship" it to a download on my site. I may end up cutting some of the features, but there's also the possibility that I'll end up cleaning them up because I've already got too much other code running around that uses them. The core basics are the ability to view and edit the system's memory, and execute a program starting at some given address. I'll guarantee that much.

I think the ability to view and edit register values is pretty well sewn up, too. It's possible to bork the OS by doing something stupid here, but I'm not trying to protect the user from being stupid. Press TRAP (probably to be labelled ESC) and the OS will restart and reset any values it needs to function (I hope!)

The ability to view inputs and edit outputs interactively shouldn't be a problem, either, but it's not an immediate priority. If it happens effortlessly, it'll be in the initial release. Otherwise, I won't hold up the release for it.

I've also got another mode that is enabled in some versions for viewing/setting user variables in the OS, such as the RST5.5 and RST7.5 vectors, selecting different display formats, setting the size of the LCD, and so on. I can pretty well guarantee that the full-up version of this will not be in the initial release, though a cut-rate version of it may be. That would mean that you would set these variables yourself when putting the code on your own MAG-85.

If I do go to a design that uses two memories, one rewritable in system and the other not, I'll have to put these variables in RAM, or expect that they be set once and for all in the firmware. I'd like to be able to provide an EPROM at some point for people who want to build a MAG-85 and get up and running without having to put the OS in themselves. But if I do, I either need to put some system information in the NOVRAM memory space (like the height/width of the LCD display) or require that only one size of LCD be used with a specific version of the OS. Since I'm looking forward to building another MAG-85 with a larger display (perhaps 32 characters wide by 4 lines high), I'd like to keep the OS flexible. The initial display routines will only use a portion of displays larger than 20x2, but the user programs can use the larger displays and the OS can be updated later to have multiple display formats that the user can select.

But right now, I just need to drive a stake through its heart and get a workable version out the door. :)

Thursday, January 5, 2012

New Call Sign, New QSL Card

I'm afraid I didn't care much for the Extra class call sign I got from the sequential call sign assignment. It was AG6HU. Before it was assigned, I saw on AE7Q's site that I was likely to get a call from the range AG6HT through AG6HV. I would have been OK with HT or HV, but I was hoping I wouldn't get HU. The call sign is funky enough with the AG prefix, but with an HU suffix there's just nothing there to love.

I tried, I really did. I tried coming up with interesting phoney phonetics for it ("higher up", "hugely unpopular", "hic up", etc.) What really sealed that call's fate for me was when I tried telling it to people. It took a minimum of three tries to get it across, even when using the International Phonetic Alphabet. That's bad.

So I applied for a vanity call sign. I didn't rush right out, though I didn't dilly-dally, either. Knowing that I wasn't going to stick with AG6HU meant that I wanted to get a new call before establishing much of an identity with that call. I spent a lot of time thinking about what I'd want that'd still be fun to have 20 plus years from now.

Since retrocomputing and microcontrollers are both hobbies of mine, the call W8BIT seemed appropriate. That's what I put at the top of the list, and that's what I got. W6CPU and W6TTY were high on my list. I didn't realize it when I applied, but another ham applied for W6TTY a few days before I did, and got that assigned during the 18 day waiting period for my new call (not that it mattered, since my first choice was available, but an example of a good reason to have more than one choice and be prepared to not get your first choice.)

Another one that would have been a lot of fun is KO5MAC, since I'm a fan of the COSMAC microprocessor (the RCA 1802.) It's a bit more specialized than simply "8BIT", so it ended up as a lower preference. Beyond the first three choices I listed, I didn't worry much about the order of the other calls I put on the list relative to my preferences. Any of them were better than AG6HU, and I pretty well expected that things weren't very likely at all to go past the top three. KO5MAC would probably have been my fourth choice if I had arranged them. It's an awfully fun call sign, just like the 1802 is a really fun chip.

I considered having a call with my current favorite microcontroller referenced, the Atmel AVR, like, say K6AVR or W6AVR (no idea if these are in use or not.) But that seemed potentially even a bit more narrow than the COSMAC reference, especially when viewed from the perspective of 20 years from now.

By the time I got my new call sign on the 4th, I'd already figured out what I wanted to do for my QSL card. I got 100 of them printed up today. Here's what it looks like:

QSL card for W8BIT, lots of 8-bit processors in the background, and one video chip that I mistook for a 6502 processor.


Ready to Go, Almost
On the more practical side of amateur radio--making actual radio contacts--I'm still moving things forward. Yesterday afternoon I replaced the towels stuffed in the window where the antenna cable comes through with a purpose-made wooden feedthrough. It looks a lot less "redneck" than the towels stuffed in a window casement.

Unfortunately, I don't have a good ground to the transceiver in its temporary home yet. I'd hoped to have time to pull that in yesterday but time ran short. But that's next. I'm not too worried about the ground when I'm just listening in, but before I key the mike I want to have a good ground on the radio's chassis. Then I'll be ready to jump into 40 meters, and possibly 15 meters.

I've been listening in a lot on 40 meters over the past week, and I'm starting to get a pretty good feel for the band. Like what frequencies folks are using pretty commonly, what sort of traffic is going on when (daily nets, some of the weekly nets, and so on.) So I'm pretty confident I won't seem to be a complete and total lid when I do key up. Though I'm prepared to make _some_ mistakes, it's part of the learning process.

Then the next major step is to clear out my corner of the garage, put in an AC/heater unit in the wall, a raised floor, and a bit of insulation. A few more touches like a mecca ground plate and feedthrough panel then I'm ready to put in shelves and furniture.

Somewhere in there I want to get or build a decent Morse key or keyer. All I have on hand right now are a couple of about the quality found in kids' science kits 30-40 years ago.

So long as my current antenna keeps me going, I'll just go with it until the new radio shack is done before hanging up a new multiband antenna. A G5RV has been highly recommended to me by at least two hams. I've got a good idea of where a full size one would go on my property, and I'm looking to see if I can fit in a double-size one at right angles, more or less, to the first. That'd (hopefully) get me on the 160m band, too.

Lots to do, lots to do. In the meanwhile I'm going to grab my HT and make some contacts on 2m simplex.

73

Tuesday, December 6, 2011

A Neat Modern Retro Computer: The New Elf II

My first computer was a version of the COSMAC Elf computer. It was a simple little computer, costing about $100 in 1976-77. The joy of it was that it was a real computer, and yet it was direct enough in its design philosophy that every single thing about it was understandable and controllable.

I wasn't the only one enchanted by this small computer system, or to have it as a first computer.

Marc Bertrand has built a new, modern version of the Elf computer, visible at his web page. It not only has the original video interface of the Elf II, but also a nice LCD display driven by a PIC microcontroller. It has heaps of I/O ability through several interfaces, nicely controlled by some Altera programmable logic.

Have a look, it's a sweet little system with a layout very reminiscent of the original Elf II computer.

This isn't it, but it's a little test circuit for the 1802 microprocessor used in Elf systems that makes a nice LED blinky:
RCA 1802 test circuit that happens to make a very nice little LED blinky
A simple test circuit for the RCA 1802 processor used in the Elf computers

Wednesday, November 9, 2011

Low Level Computer Teaching Options

We have a current discussion on the COSMAC Elf Discussion Group that centers on the idea of a small computer to teach low level computer concepts. Many of us in the group got our start with the COSMAC Elf as our first home computer. It is a small, simple, inexpensive computer. One of its finest points is that it is simple enough that a person of ordinary intelligence can understand how every part of it works, down to the lowest detail.

The place for a small teaching computer, as we're discussing it, lies somewhere between electronics and the standard non-computer science introductory computer programming class. It's a matter of teaching what the components in the system do, and how they do it. This becomes a model, at larger scale, of what happens inside more powerful modern computers, such as current desktops, laptops, tablets, and smartphones.

COSMAC Elf single-board computer with PIXIE video. A complete computer system in 1977 with only 13 integrated circuits!
The COSMAC Elf, this version includes video graphics.


Is anyone using something along the lines of a microprocessor trainer in the classroom today outside a college level EE class?

Personally I can see two general approaches to this, with several possible variations on the two themes. Let's look at them, then I'll go into Blue Sky mode to talk about what I sort of wish for.

Some Ways to Bring Computer Hardware into Class
One is to fake it entirely with present-day hardware. After all, if it's possible to do a complete chip-level simulation of an 8-bit processor in Javascript, it shouldn't be much of a stretch to simulate an entire simple 8-bit microcomputer in a program with the ability to "see" all the operations inside simulated on the screen.

The problem is that this still really fails to make what's being taught "real". To the students, it becomes just another show to watch--one with no particular interest to most of them.

The other approach is to use an actual old microcomputer in class, like the Elf, with the students handling the system, measuring voltages or using logic probes to "see" the signals in the computer. Something more sophisticated would be using chip clips with LEDs on the various lines as a sort of multi-line logic probe. (Here is a place where an Elf or other RCA 1802-based system would shine. The 1802 is a fully static processor. It can run at clock speeds from 0Hz on up to its maximum clock speed, with clock changes on the fly. I have literally clocked 1802 systems by hand by connecting and disconnecting the clock line to +5V and Ground lines, counting out machine cycles as displays show the status of various system lines. There are not a lot of computer systems that can do that!)

An annotated image of a COSMAC Elf computer, showing the location of CPU, memory, and other ICs.


Between these two lie many other options. One would be to have a hardware board that connects to a modern computer through a common interface, like USB, where some I/O devices could be visibly controlled by the computer (via lights, motors, etc.) and with lines exposed that can safely be probed by the students.

Another would be using a more modern hardware platform, perhaps based on one or more microcontrollers that emulate the function of an older system, exposing such things as memory access, control signals, and so on to the students. The board could include displays and LEDs to show the status of the lines, internal pseudo-registers, and so on. The operation of the entire system, both inside and outside the simulated ICs, could be made available to the student's eyes.

Part of what needs definition is the acceptable limitations of the system. In my own case, I see such a system as being an introduction to low-level hardware operation and control of that operation through software.

Blue Sky Dreaming
If I could have what I wanted without any effort on my part or a significant amount of the school's money, here's what I'd like:

Step 1
I would want to introduce a basic system that's very similar to the original Elf of 1976.

It would have:
  • Toggle switch inputs (to associate signals with data and to help teach binary),

  • A binary LED display and a two-digit hexadecimal display,

  • Very limited memory (about 128 to 256 bytes), to teach how much can be done in limited memory and to limit the size of early programs to sane sizes,

  • Exposed memory and I/O lines, possibly with LED monitors

  • Extra monitors, like maybe dual color LEDs to show data direction on I/O ports, etc.

  • A simple machine language with whole-word mnemonics.

  • The ability to operate at extremely low clock speeds (0-100Hz) as well as higher speeds (1-10MHz or something like that.)


Step 2
  • Hexadecimal Keyboard

  • 512B to 1024B of RAM

After the first few lessons, the toggle switches would get old and I'd want to introduce a hexadecimal keypad. This would teach hexadecimal, and continue the association of computer instructions with numeric values in the computer. Presumably the connection between signal levels and numbers has been made using toggle switches.

With the easier input technique, it'd be nice to add some more memory, up to something like 512 bytes to 1 kilobyte.

Step 3
  • Keyboard with instruction mnemonics and hex digits

  • Perhaps more memory, up to about 4K

Next, a keyboard would be attached. Perhaps writing software to interface the keyboard to the system would be one of the Step 2 projects. While I'd be tempted to use an ASCII keyboard, I think a raw matrix keyboard would teach more. On this keyboard, machine language instructions and hexadecimal numbers would be mapped to each key. This would again speed programming, and reduce errors. The simple machine language I envision has a particular addressing mode associated with each mnemonic, so there's still no assembling of code required.
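The scanning idea itself is simple, and would make a good student exercise. Here's a sketch in Python, with hypothetical `drive_row` and `read_cols` helpers standing in for the port I/O: drive one row at a time, read the column lines, and map the position through a key table:

```python
# Sketch of a raw keyboard matrix scan (Python stand-in for what would
# really be port I/O on the trainer). drive_row selects one row of the
# matrix; read_cols returns the column lines as a bitmask.

def scan_matrix(drive_row, read_cols, keymap):
    """Return the mapped value of the first pressed key, or None."""
    for row in range(len(keymap)):
        drive_row(row)                 # energize one row
        cols = read_cols()             # bit set = key down in that column
        for col in range(len(keymap[row])):
            if cols & (1 << col):
                return keymap[row][col]
    return None
```

With instruction mnemonics and hex digits in the keymap table, the same scan routine serves both uses of the keyboard.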

Step 4: A larger step
Next, I'd move to a more abstract level. I believe that the activities prior to this point would teach low level operations well enough to take this jump and still be able to show the connection between the two.

For step 4, the computer would get:

  • More memory. Anywhere from 4K to 64K. Perhaps it would start at 4K and grow as the students hit the limitations of each memory size.

  • A terminal connection to a current generation computer for keyboard and display, or an encoded keyboard and some other form of text display.

  • New firmware (probably activated from on-board with a mode switch), which would provide a fairly sophisticated command line interface with command editing, recall, etc., as well as an interactive programming language. The specific language doesn't matter too much, it could be a BASIC, a bash-alike, a LOGO, or an interactive form of some other compiled language.

  • Mass storage. Probably some modern semiconductor memory.


The point at this step would be writing high level programs to perform low level actions like those seen in the earlier steps: seeing line levels and I/O operations performed, using bitwise operators, and seeing the signals represented as numbers of various bases within the language (which I'd expect to support at least binary, hex, and decimal representations and constants.)
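The kind of exercise I have in mind, sketched here in Python purely as an illustration (the actual interactive language on the trainer could be anything): take a byte "read from a port" and pick it apart with bitwise operators, showing the same value in all three bases:

```python
# A Step 4-style exercise: take a byte read from an I/O port and pull
# it apart with bitwise operators, displaying the one value in binary,
# hexadecimal, and decimal.

def describe_port(value):
    lines = [f"raw: {value:#010b} = {value:#04x} = {value}"]
    for bit in range(8):
        level = (value >> bit) & 1      # isolate one line's logic level
        lines.append(f"bit {bit}: {'HIGH' if level else 'LOW'}")
    return lines
```

The student sees directly that the binary string, the hex byte, and the decimal number are the same set of line levels, just written three ways.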

Step 5
The final step with the low level computer would be to produce more sophisticated programs. These would be longer programs, probably projects done by groups of students over a few weeks in class. At this point the understanding of the program control structures and data structures should be a bridge to programming in the chosen language directly on the modern computer.

Final Thoughts
These thoughts are somewhat half-baked as they stand. I or someone would have to do some more work to really define this and turn it into hardware and software and a curriculum to go with it. Some points that need considering are the demarcation between this and a robotics class, common in many schools now (including the one at which I teach.) Also, how much class time does this merit? And so on.

Personally I think that using a micro trainer level system is simple enough to be mastered by most middle-school level students. I've got some actual experience with students to back that up, in addition to my own experience (I was 14 when I constructed my own Elf.) For the students, the information not only gives them an understanding of the underlying technologies of current systems, but would open the doors to embedded systems, far more common than conventional general purpose computers. Either way, it would make the computer far less a piece of technical magic controlled by somebody else and far more something comprehensible, and therefore controllable, by themselves.

Some related work--a 4 bit TTL Processor.

The fact is, all the steps above would probably be unnecessary and involve too many changes to the hardware platform to be practical in class. A more reasonable approach would probably be to go from a slightly more capable Step 1 computer directly to Step 4. This would reduce the opportunity for student disorientation as a result of seemingly constant hardware changes, and still be enough to get the key points across.

The activities I envision for Steps 2 and 3 could be either dropped or performed in either the initial or final configuration of the system. This would also simplify the system itself.