Showing posts with label retrocomputing.

Tuesday, April 22, 2014

Clearance on Propeller QuickStart Boards at Radio Shack!


I love this little board. I had two, now I've got five. Not to mention several other Propeller boards. One of these is probably going to become a terminal for my Ampro Little Board Plus CP/M computer system. I think I'm going to give it an LCD display, the Prop as a dedicated terminal, and build it into an all-in-one. An iAmpro, you might say. ;)

Wednesday, November 27, 2013

Holiday Video Game Classics

It's that time of year. The music starts playing in the stores. The decorations go up. You come home, have a look in the closet, and find your old video game systems there waiting for you.

I don't know why it is, but this time of year seems to turn our minds back to games past. Pac Man, Super Mario, Donkey Kong. Just yesterday I had a friend over, and we ended up doing a tour of my old video game systems. Later my daughter came to me. She'd been going through her old Game Boy cartridges, finding old friends in a dresser drawer. (I play my Game Boy games with my GameCube adapter, which puts them up on my 55 inch TV. No squinting for me any more.)


The problem is, how do you hook up these older systems to the new TV you bought in the meanwhile?

Fortunately, I've got an entire web page on hooking up your classic video game or computer system to your TV. So you can not only hook up your old Atari or Nintendo, but also your Commodore 64, Apple II, Atari 800, Vic-20, or whatever.

If it's gathered some dust, or got put away with finger smudges on it, I've also got a page on caring for your older electronics.

I love the old stuff myself. I know I'm not alone. When we have guests over the holidays, I can't tell you how much fun we have playing fun, simple old video games like Ms Pac Man, Pong, Shadow Squadron, Space Invaders, and so on. Even Katamari Damacy is on the old-time play list now, though the PS2 hooks up to our modern TV just fine--this year. In a few years I'll probably have to create a web page on that, or a page on building a video modification to allow PS2 players to relive favorite moments with their older game system.

We gotta keep bringing our old friends along into the future with us, right? ;)

Tuesday, November 26, 2013

Why Electronics Took Over the World


How did we end up in a world where computers are everywhere?

Originally, we had vacuum tubes as electronic components. Each of these had to be hand-made. When you consider that even the most basic computer, about the power of a programmable calculator, requires about 4,000 electronic switches in it (including some basic control, memory, and interface circuits), you can see that needing 4,000 hand-made parts is going to get expensive. And that's before you wire them together into a working computer. It's like having to hire a team of scribes every time you want to get a new book.

Each of those tubes is like a decorated capital drawn by a scribe.

Transistors were a big step forward. Transistors aren't made one at a time by hand. Packaging them involved some hand work back when they were new, but the guts of them were produced en masse. Making transistors was like printing a sheet covered with the letter "B" so that you could cut it up and have a letter B to stick wherever you needed one. In the same way, transistors are made in a large group, which is then cut up into individual transistors and packaged for use.

So why not print the equivalent of a small piece of often-used text, rather than cutting it up into individual letters? This is the basis of the integrated circuit. It was another step forward in reducing the cost of electronics manufacturing. The first circuits were like having commonly used words, in complexity. Over time, technology advanced to allow more and more sophisticated circuits.


Eventually the circuits got more and more complex, and more useful. Building a computer got to be about as complex as creating a book on a typewriter. That means it took patience, and skill, and it was still expensive, but not nearly as expensive as hiring a team of scribes.

Each integrated circuit has from a few to as many as a few hundred transistors on it at this point. Building a basic computer circuit could be accomplished with a couple of hundred ICs.

The next step was a big one. Integrating the entire guts of a computer onto a single die, then printing them not one at a time, but by the tens then the hundreds at a time.

In the mid 1970s enough transistors were printed together, in the right circuits, to make a basic computer. Add some memory (another technology that had recently benefited from the improvements in integrated circuits) and a few ICs for control and for interfacing to the outside world, and a complete computer could be built out of a handful of integrated circuits. Like my MAG-85 computer project, which uses about 10 ICs to build a basic 70's style computer.

But that wasn't enough. It was enough for calculators and very simple computers that require someone with a high level of skill to get the most out of them. If we'd stopped there, only very technical or very driven people would have computers. We had to increase their complexity to make them more capable, and easier to use.

Since then, we've improved our "printing processes" to allow us to produce integrated circuits that contain not just a few thousand "switches", but billions. Your computer, cell phone, or tablet contains the equivalent of billions of vacuum tubes. And yet, those billions of sub-microscopic electronic switches all together require less electrical power to operate than one single vacuum tube. They also generate less heat.

If we put the entire world population to work building electron tubes as fast as they could, we couldn't produce enough tubes in a year to reproduce the computing power of a single cell phone. In part because we couldn't build tubes that switch as quickly as the transistors in a cell phone.
The guts of a simple vacuum tube.
Imagine building a few billion of these, by hand. Image courtesy RJB1.
But the computer in the heart of that cell phone is one chip that was printed alongside hundreds of others just like it in a mass production process that's very similar to printing. Many of today's computer chips literally cost less to make than a printed magazine or book. Far less, usually.

This triumph of manufacturing, reducing electronics to a simple, inexpensive, high volume printing process, is why we have computers everywhere from our cell phones to our irons and dishwashers. They're cheaper to build than the parts they replace.

Have a look at a current computer chip sometime. Inside it are several billion man-made structures. You could look at them with a microscope if the top were removed, but you would only see patterns, not individual elements. The individual elements are too small to see in visible light now.
There are several billion man-made things in this image.

Wednesday, November 13, 2013

Lee Felsenstein at Homebrew Computer Club Reunion

Lee was the MC at the main part of the club meetings back in the day, and he reprised that role on the night of the HCC reunion. He was also the designer of the computer of that day that I most desired, the Sol 20 Computer. I loved that system--the look, the keyboard, its operation.

Image by cellanr
There were just two things you needed to know about the Sol to make life happier. One: build the fully expanded system right at the outset, because opening up the heart of the system to expand it later was a major PITA. The other? Use someone else's disk subsystem. Though with the information available today, a Helios disk subsystem could probably be made to work.

I still have the sales brochures for the Sol 20. I pull them out every now and then to drool over them again. Part of it is nostalgia, but part of it is the great design itself. Actual Sol 20s sell for more than I can afford, but perhaps I'll build myself a look-alike system from sheet metal and walnut wood sometime, anyway, and print up a nice black name badge.

I still have an Osborne 1 computer. It's one I got only relatively recently. It is pretty well maxed out on upgrades (disk upgrades, video upgrades, etc.) and is a pleasure to use. It's not as pretty as a Sol, but I enjoy showing it off in current day computer classes. The kids love it--especially the floppy disk drives and the tiny screen. But...they get hooked on Zork.

Lee Felsenstein Today

In our conversation last Monday, Lee showed me a project he's working on today as an educational tool. It's a programmable logic simulator, targeted at middle school students. What Lee showed me was a pair of printed circuit boards that have captive fasteners to clamp them together around a plastic matrix. The matrix holds surface mount diodes, which the students can place into the matrix to program it. In essence, it's a 16 by 8 programmable logic array that is programmed through physically locating the diodes.
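To make that concrete, here's a toy software model of what a diode matrix computes (my own sketch, not Lee's actual circuit): placing a diode at a row/column intersection wire-ORs that input row onto that output column.

```python
# Toy model of a 16-input x 8-output diode-matrix logic array. A diode
# placed at (row, col) wire-ORs input line `row` into output line `col`.
# This illustrates the concept only; Lee's actual board may differ.

N_INPUTS, N_OUTPUTS = 16, 8

def make_matrix():
    # diodes[col] holds the set of input rows feeding that output
    return [set() for _ in range(N_OUTPUTS)]

def place_diode(diodes, row, col):
    diodes[col].add(row)

def outputs(diodes, inputs):
    # each output is the OR of the inputs its diodes connect to it
    return [any(inputs[r] for r in diodes[c]) for c in range(N_OUTPUTS)]

diodes = make_matrix()
place_diode(diodes, 0, 0)   # inputs 0 and 3 both feed output 0
place_diode(diodes, 3, 0)
place_diode(diodes, 5, 1)   # input 5 feeds output 1

ins = [False] * N_INPUTS
ins[3] = True
print(outputs(diodes, ins)[:2])   # [True, False]: output 0 high via input 3
```

Moving a diode changes the logic, which is exactly the point: the "program" is a physical arrangement you can see and touch.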

OK, I know that sounds totally abstruse to many of you, so let me tell you what makes this a great idea, and why your middle schooler ought to know about this stuff even if you've gotten through life without having to so far (assuming you don't know already).

The cores of computers are built out of logic circuits. Memories feed the logic circuits with data (in current designs--it doesn't have to be that way, though it's presently the assumption); in essence, programmable logic is the complement to memory. This analogy of logic and memory as complementary components of a computer holds on many levels. It's possible to build logic out of memories--I've done it--but it's not efficient.
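As an aside on "logic out of memories": the trick is that a memory's address lines act as the gate inputs, and its stored bits are the truth table. A minimal sketch:

```python
# Any two-input gate is just a four-entry lookup table: the two inputs
# form the address, and the stored bits are the outputs.

XOR_TABLE = [0, 1, 1, 0]     # contents at addresses 00, 01, 10, 11
AND_TABLE = [0, 0, 0, 1]

def rom_gate(table, a, b):
    # a and b (each 0 or 1) form a two-bit address into the "memory"
    return table[(a << 1) | b]

print(rom_gate(XOR_TABLE, 1, 1))   # 0
print(rom_gate(AND_TABLE, 1, 1))   # 1
```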

Initial education in logic circuits can be accomplished with a simple breadboard and some logic chips. A few AND chips, OR chips, NAND chips, inverters, and so on. Add some resistors and LEDs and the kids are off and running. For a little while. Once they master this, and understand what's going on, they immediately start expanding their ideas.

Then a problem hits. More chips and more wiring between them mean more complexity, and more difficulty in realizing their ideas.

At this point, it's possible to introduce them to programmable logic devices. Teach them that the logic functions they had in the ICs live inside the PLDs, and that they can program the devices rather than run wires. The problem is that this is a big, big jump up in abstraction level, especially for a kid in the middle school age bracket (which is the perfect age to introduce this stuff, as I'll go into later).

Lee's invention, by contrast, maintains a physical element. The programming is accomplished by manually placing diodes into a matrix, rather than typing characters on a screen and punching the 'program' button to dump it to a Flash PLD. This keeps it from getting too abstract, encourages experimentation, and maintains the hands-on element that's necessary for students in the 9-13 age range.

Building Blocks of Electronics

Electronic logic is building blocks. Your kids play with building blocks, right? They start with simple structures to learn how to build more complex structures. Before long, they can use every single piece they've got building large, complex structures. Once the individual blocks and a few simple ways of interconnecting them are understood, they can take off and make great big projects that reach to the ceiling.

It's the same with electronic logic. It's a collection of simple building blocks. The problem is, the complexity of assembly is a little greater. Enough that once you get past a certain level (I'd say 20-30 ICs), it gets progressively more difficult to implement your ideas. The ideas out-race the ability to construct.

This shouldn't be an obstacle. The ideas should be allowed to continue to grow, without removing the physical aspects that make the activity interesting.

The Lee Felsenstein Magic

Lee has hit a sweet spot here. With all the excitement about the Raspberry Pi (which I will save my criticisms of as an educational tool for a future article), Lee's project should have that sort of excitement going for it. This is about students building their own processor. This knowledge is important. This is what the people who caused the microprocessor revolution used to cause the revolution in our lives. This is the knowledge that put a CPU in your telephone, your oven, and your iron. This is what tunes your radio.

Assembling a processor from random logic is a huge project. Yes, people still do that (I've even built a very, very simple one from racks of relays, myself, under cover of testing those relay racks and their support wiring after installation.) Building your own processor with a PLD is a lot easier, once you understand the building blocks.

Lee explains himself well on his project page. Have a look. I will be following the progress of the project.

And I'm really glad I got a chance to meet up with Lee again after all these years. He was one of my mentors and inspirations in my youth, just as he describes those who mentored him. It seems to be a common thread that those of us getting older want to assist the younger generation just as we were assisted when getting started in technical pursuits (as hobbies--the jobs came later.)

And if you're raising a kid--don't just foist off software on them as something to play and "learn" with. Software isn't reality. I've designed any number of computers on paper and in software, and then gone on to build far fewer of them. Because software and paper aren't the real thing. The real thing has all sorts of little niggles and oddities that you'll never learn about in any way other than doing the real thing. Teach your kids to solder, use solderless breadboards, and use real components at all levels of complexity. Don't try to do too much at once: start with kits, then move toward recreating circuits on breadboards, then on to soldering them on prototyping boards.

But do the real thing. Right alongside your other crafts projects. Because electronics is just as much a craft with some useful products as is crochet or embroidery (both of which I do) or quilt-making or sewing (which some of those close to me do). And most of all, have fun!

Tuesday, November 12, 2013

Ted Nelson & Computer Lib at Homebrew Computer Club Reunion

I had a great time at the Homebrew Computer Club Reunion last night, which, I learned, was made possible by a Kickstarter (thank you, KS backers!)

One of the great conversations I had there was with Ted Nelson, author of Computer Lib/Dream Machines, and his wife, Marlene Mallicoat. My wife and I had a wonderful time speaking with them. Ted has published a new edition of Computer Lib. It's not a reprint from scans of the original, but a new printing from the original negatives. It's as clear and sharp as the original was back when, possibly even better. It's in the same large format, as well, not scaled down to the size of paper that happens to be cheap and convenient for most books.

Sorry about the fold, I only had my hip pocket as a place to put his flier last night.
I was working so hard at being social last night it didn't even occur to me that I could probably have purchased a copy directly from him right then. I saw that he had a number of copies in his bag, too. It's little things like this that I always think of when people tell me how smart I am. Yeah, about some things, maybe, but about other things I'm not so much.

Nevertheless, I'm going to purchase it now, after the event. I read someone else's copy back when, having noticed it as a pillar on a bricks and boards bookshelf among a number of copies of The Fabulous Furry Freak Brothers (Fat Freddy's Cat was my favorite of the crew.) Now I'm looking forward to having a Computer Lib/Dream Machines book of my own.

If you're not familiar with Ted's work, I strongly recommend correcting that. The web could be so much more than it is, and require far less human "curation" than it does, if it hadn't turned into the mishmash of improperly structured information that it has become. I'd say more, but rather than reading my take on what he thinks, go to the source.

Hopefully I'll have a chance to post more about last night's event in future articles. There was so much packed into so little time that my head is still spinning from it. (They managed to recreate the atmosphere of the original meetings perfectly in that regard.)

It was great that my wife got to hear Ted speak during the formal presentation portion of the evening, too. I got to hear him speak a few times back when; he's a dynamic and engaging speaker. He makes you think about how things could be, possibilities that are better than reality. Now we have hearing Ted speak as a shared experience.

Wednesday, August 15, 2012

Parallax Propeller + COSMAC 1802: the Saga Continues...

Monday morning I posted about hooking up a Parallax Propeller QuickStart board to an 1802 microprocessor. My hope is to make the Propeller behave as a RAM for the 1802, primarily to be able to display a bitmap of that segment of memory as video. The idea is to make it software-compatible with the 1861 Pixie chip (the 1802's native video adapter), which is presently unavailable.

Since then I've spent more time on this project. A lot of that time got tied up in side issues related to tying a Propeller to an 1802, as well as some time spent dealing with the test equipment I want to use to keep an eye on the circuit and see what it's up to.

First, here's how it looks tonight:

Propeller QuickStart hooked up to an old CDP1802 microprocessor.
The second stage of the Propeller/1802 mashup circuit. Click for full size.
I've got a closer relationship between the two processors now. The Propeller is actually doing something. Before, the 1802 was getting power and a fixed clock signal off the QuickStart board, but the Propeller was not itself involved.

Clocking the 1802
One thing I put a fair bit of time into (too much, that is) is using the Propeller to control the clock of the 1802. There are a few reasons I think this would be nice:

The clock can be varied in software depending on the user's desire and the individual 1802's capabilities.
At some later point, the Prop can have a greater degree of control over the 1802's operation in general. With the ability to stop it, start it, and even single-step it.

On my first stab at this a couple of days ago, I just threw a repeat loop into the Prop's code to pulse an output line. I didn't have very good control of the rate, I didn't have the constants set right, and it didn't work right off the bat so I decided to drop it and just use another clock.

Let's Try a Timer, Because...That's What They're For
This time I took time to read up on using the counter/timers on the Prop. Each cog has one, and I played around with it for a while before feeding its output to the 1802's clock input. After a while I was able to get the results I wanted, and was able to fine-tune that bit of the code.

Then I hooked it up to the 1802 with a frequency of 1.25MHz...well, I should say I hooked it up to a 4049, passed it through a pair of inverters to square it up a bit and buffer it. Then I passed the signal to the 1802. Which ran just fine.

At this point I still had the data bus of the 1802 hard wired to a $C4, which is a NOP instruction for the 1802. That way I can watch LEDs on the address bus and see that the chip is running through its address space. The pulse rate of the high order LED gives me an immediate idea of how fast the 1802 is getting clocked, too.
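A quick back-of-the-envelope on that pulse rate (my arithmetic, with assumptions: the $C4 NOP takes three machine cycles of eight clocks each, and the "high order LED" shows bit 15 of the full address, which is multiplexed onto the 1802's eight address pins):

```python
# Estimating the top address LED's blink rate with the data bus strapped
# to $C4 (NOP). Assumptions: $C4 takes 3 machine cycles of 8 clocks
# each, and the LED shows address bit 15.

clock_hz = 1.25e6
clocks_per_nop = 3 * 8                      # 24 clocks per NOP
nops_per_sec = clock_hz / clocks_per_nop    # ~52,083 instructions/sec

# Bit 15 completes one full on/off cycle every 65536 address increments:
blink_hz = nops_per_sec / 65536
print(f"{blink_hz:.2f} Hz")                 # about 0.79 Hz -- near one blink per second
```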

Then I started playing with the constant for the counter's frequency to see what speeds this 1802 chip would be good at with a 5V Vdd and 3.3V Vcc (the 1802 can use split supply voltages for its core and I/O. Not bad for 1976, eh?)

Overclocking! 4.8MHz Baby! Yeah, uh, Megahertz.
This one ran up to 3.6MHz without a problem. I didn't run it up until it stopped; the other day I think I had it running at about 4MHz when I was playing with the repeat loop. Above 3.6MHz it seemed to start running hot, so I stepped it back down again.

Above 3.6MHz it was noticeably warm after a while, and seemed to be getting warmer. Between 3.2 and 3.6MHz it was warm, but maintaining its temperature just fine. At 3.2MHz it was solid as a rock and the heat was barely detectable to a calibrated fingertip. If I can get it to run at this speed reliably with the Propeller providing video then popular Elf software like Tiny Basic, Forth, CHIP-8, and the programs written in them are going to rock on this system.

Edit: I've run it up to 4.807MHz now, and it didn't seem all that hot this time. It really shouldn't have any problem with heat at these voltages, it normally runs at 5MHz at higher voltages. And now it doesn't seem to have a problem. When I tried to take it over 4.8MHz it ran sometimes but not others, or hung occasionally. So this is the limit for stable operation at this voltage for this chip (an RCA CDP1802CE.)

Getting Data to the 1802

OK, so back to the problem of making the Propeller look like a RAM. Frankly, when I did my first look at the timing involved, I figured this would be completely trivial to implement. Push in data and address bus wires, connect up /MRD (memory read signal) from the 1802 to the Prop, crank a little Spin code, and I'd have the Prop acting like a ROM.

Put in /MWR and a little more code, BAM, the Prop is a RAM.

Hook up TPA to catch the high order address byte, tweak the code, and the Prop is ready to map more than 256 bytes. All I'd have to do is add the video code then figure out how much Prop RAM was available then expand to fit.

Well, it wasn't that easy. If it had been, I'd be playing with the video now.

I tried to leap forward at first. I plugged in the address and data bus wires, plugged in /MRD, and added some code to my clock driver program. It had a 256 byte array that it set up and initialized to $C4 in every position (NOP again, but this time it would be a soft NOP rather than a hard-wired one.) Then I wrote the program to wait for the Propeller I/O line I'd selected for /MRD to go low. Once that happened, it would get the address off the address bus, look up the appropriate element in the RAM array in memory, and put it out on the data bus until /MRD came back up again.
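In pseudocode form, the loop described above amounts to the following (Python standing in for the Spin; the "bus" is a simulated dict, not real pins, and the names are mine):

```python
# A software model of the emulated-RAM read cycle described above.

ram = [0xC4] * 256                       # soft NOP in every location

def serve_read(bus):
    """While /MRD is low, drive the data bus with the addressed byte."""
    if not bus["mrd_n"]:                 # /MRD is active low
        bus["data"] = ram[bus["addr"]]   # look up the byte and drive it
        bus["driving"] = True
    else:
        bus["driving"] = False           # release the bus when /MRD rises

bus = {"mrd_n": False, "addr": 0x12, "data": None, "driving": False}
serve_read(bus)
print(hex(bus["data"]))                  # 0xc4
```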

Easy-peasy, right?

Well, it didn't work.

Oscilloscope signal trace of one of the data lines and /MRD on the Propeller/1802 mashup.
Top: Data from Propeller pretending to be a memory. Bottom: Memory Read Request. Result: Memory Too Slow! (Click for full size.)

When I was doing my timing calcs and looking at the 1802's leisurely timing, I saw that it doesn't mind waiting a few microseconds for its memory to respond when it's running at its normal speed (about 1.78MHz, where a machine cycle runs around 4.5 microseconds). Hey, an 80MHz processor could be running a version of interpreted BASIC written in LISP and keep up with that, right?

I guess not. At least not the way I did it.

I clocked the 1802 at 1.25MHz for the test. This is an easy, slow speed that the Propeller can generate easily. I figured it'd work, I'd push up the frequency to 2.5MHz, check timings on the scope, and discover that I couldn't make the 1802 go too fast for the Prop.

Unfortunately the trace above shows different.

I Can Do Lots of Things Wrong, All at Once
I'm sure this is the result of something I'm doing improperly. I can think of several possibilities already:

I'm using WAITPNE and WAITPEQ to respond to the pin change. Perhaps, because they do some power-saving activity on the cog on which they're invoked, they just aren't meant to respond this quickly. Maybe if I just go to an active polling loop I'll be OK?

I'm using SPIN, an interpreted language. Perhaps I need to just insert a bit of Propeller assembly to tighten up the timing?

Perhaps there's some option or configuration thing I've not done?

Maybe this cog has to do something active with the timer/counter (I'm using the same cog as the one that's running the 1802's clock.) Perhaps if I move the RAM functions to another cog I'll be fine?

Those are just what comes off the top of my head. I'm far from being an expert on the Propeller, and I haven't picked up the books on it yet, just the free manual and datasheet downloads. But things like this create an opportunity to learn.

I also had a few other things I needed to figure out on the way, minor little things like the order you mention the output pins in when writing your code statements (I was getting $23 out instead of $C4 initially. Yeah, oops.)

That and the whole thing is a rather fragile lash-up right now. I'm going to go to solder as soon as I get the RAM moving data to and from the 1802. But not before, because I hate rework, and if I don't test it on a breadboard first, there'll be rework.

And this lets me do a few things like change which lines I'm using for address and data easily. I moved the data lines to P16..23 so that I can watch the data on the QuickStart board's LEDs, like a little front panel. Before, I had the address lines here, but I've already got LEDs on the breadboard showing me the address line states. So a quick change of a few constants in software, and I get a data display with no re-soldering.

Slightly Off-Task Tasks
I also took some time out to search for a line cord for the new frequency counter I got at the Ham Radio Club the night before last, since it didn't come with one (found one, after searching high and low--it's an oddball, not an HP cord.) And I checked out the two portable Non-Linear Systems oscilloscopes I bought, too (one works, one doesn't, as expected.) The frequency counter was very nice to have while I was figuring out how to use the Prop's counter/timer as a numerically controlled oscillator, so the time was well spent even if it did eat into my Propeller-as-RAM time.

Looking Forward to Round 3
I'll be back at this before the end of the week (I'd like to tomorrow, but it'll be a busy day so I may not be able to.) If you have any suggestions or salient experience, your comments or emails would be much appreciated!

Edit:

I've had a look at the Propeller docs. It looks like the timing of hub instructions is my problem. I may just insert WAIT states for the 1802 and see what that does.
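For anyone curious about the arithmetic: on the Propeller 1, each cog only gets a turn at the hub every 16 system clocks, and Spin bytecodes are themselves fetched from hub RAM. A rough budget (the hub figures are datasheet values as I understand them; the per-bytecode time is an assumed ballpark, not a measurement):

```python
# Rough numbers on why a Spin loop can't keep up with the 1802's
# memory timing.

sysclk_hz = 80e6
clock_ns = 1e9 / sysclk_hz               # 12.5 ns per system clock
worst_hub_wait_ns = 15 * clock_ns        # a hub op can wait up to 15 clocks
print(f"worst hub-slot wait: {worst_hub_wait_ns:.1f} ns")   # 187.5 ns

# The 1802 side: at a 1.25MHz clock, one machine cycle is 8 clocks,
# and the read window is at most about one machine cycle:
cycle_1802_us = 8 / 1.25e6 * 1e6
print(f"1802 machine cycle: {cycle_1802_us:.1f} us")        # 6.4 us

# A handful of interpreted Spin bytecodes--each a hub fetch plus work,
# assumed here at a few microseconds apiece--blows well past that:
spin_bytecode_us = 4.0                   # assumption, not a measurement
print(f"five bytecodes: {5 * spin_bytecode_us:.0f} us")     # too slow
```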

Monday, August 13, 2012

Parallax Propeller + Cosmac Elf = ?

I've started working on one implementation of an idea I've had for a while...

There's this neat old 70's computer system called the COSMAC Elf. It's like a lot of the microprocessor trainer systems of the time, but it's got some unique abilities that make it a bit more interesting to build, and expand on, than some of the others.

Video output being one of the biggies.

Step One
Before I go further, here's a look at an early step in implementing my idea:
An 1802 CPU on a breadboard connected to a Propeller Quickstart board for power and clock.
A first step: Power and Clock from the QS Board to the 1802. (Click for bigger image.)

Here I've gotten to the point of getting clock and two levels of power from the Quickstart board to the 1802 CPU. First, I had a crystal oscillator circuit providing a clock to the 1802. Only 5V power came from the Vin on the Quickstart board.

Then, I split the voltages. The crystal oscillator, the inverter for the clock signal, and Vdd for the 1802 were split off with 5V power, and the rest of the circuit was put on the 3.3V power from the Quickstart board. At this point, I'd been running the 1802 at 1MHz, slow enough I could easily watch the LEDs on the address lines changing as it ran.

Then I moved up to a 2MHz oscillator. The 1802 was still good with that, with its Vdd at 5V and Vcc at 3.3V.

Then I tried to get fancy.

I took an output off the Prop and tried to use a repeat-wait structure to clock the 1802. It worked, up to a point. But I got to where I was unsure of my actual frequency, and the 1802 stopped running at a slower speed than I expected (or so I thought). In fact, I was getting too clever and messing myself up. Looking back, I was probably somewhere above 4MHz when the 1802 refused to run any more!

After a while I realized that, and just decided to put the crystal oscillator back in.

Then I took another look, and decided that the 5MHz crystal (XI) on the QS board could be used as a clock base. A 2.5MHz clock would be fine (actually, anything from 1.76MHz on up would be fine.) Most Elf computers run somewhere around 1.76 to 1.79MHz to accommodate the clock for the video IC they use. Getting at least that speed is pretty much a must for me to feel like this project is going where I want it to. But getting a faster clock would be even better, as we'll see.

First I dropped in a CMOS part--a 4013--to act as a divider for the 5MHz clock to drop it to 2.5MHz for the 1802. I forgot that at 5V the 4013 only really works up to 4MHz on its input. So that turned out to be a waste of time.

Then I dug out a small supply of 74AC74 ICs, which work fine at 5MHz and well above. It worked fine, dividing the 5MHz down to 2.5MHz. In fact, just to be a little conservative, I used both flip flops to divide the clock down to 1.25MHz, then ran that to the inverter I'd had the crystal oscillator going to.
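The divide-by-two chain is simple enough to sanity-check in software. Here's a toy simulation of mine of two toggle flip-flops (each 74AC74 stage with /Q fed back to D toggles on every clock, halving the frequency):

```python
# Toy simulation of a ripple divider: each stage toggles on its input
# edge and clocks the next stage when it falls, halving the frequency
# at every stage (5MHz -> 2.5MHz -> 1.25MHz).

def toggle_chain(edges, stages=2):
    """Return the number of output toggles of each stage after `edges`
    input clock edges."""
    state = [0] * stages
    toggles = [0] * stages
    for _ in range(edges):
        clocked = True                   # an input edge arrived
        for i in range(stages):
            if not clocked:
                break
            state[i] ^= 1                # this stage toggles...
            toggles[i] += 1
            clocked = state[i] == 0      # ...and clocks the next on 1 -> 0
    return toggles

# Over 1000 input edges the second stage toggles half as often as the
# first, i.e. each stage runs at half the frequency of the one before:
print(toggle_chain(1000))                # [1000, 500]
```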

That worked, so I tried the 2.5MHz output. At first the 1802 wouldn't run; I pushed the Reset switch, then noticed the clock took off when I bumped the ground wire. Once the ground wire was back in its place securely, the 1802 ran just fine at 2.5MHz.

Then I bypassed the 4049 inverter; the signal from the 74AC74 is plenty strong enough to drive the 1802 by itself.

That was step one. Time for a break before step two.

The Plan
So why hook up an old CPU to a fast modern CPU like the Propeller?

Because of a problem with getting chips for the old Elf computer.

People still like building the old Elf computer. It's a complete computer system that can be built in an afternoon if you've got all the parts and tools at hand (I built my first in about four hours.) It's a computer that pretty well exposes all of its parts to examination, so it's easy to learn how it works, and to understand all the bits of the system.

The video IC, called the CDP1861 Pixie chip, is one of the simplest video ICs ever made. It's basically some timing and control signals wrapped around a shift register that works with the DMA mode of the 1802 to produce a really nifty one chip video interface.

It's not exactly workstation graphics, being monochrome with resolutions ranging from 64 x 32 to 64 x 128. But it does the job. People program the system using this quality of video. And you thought the Vic-20 was low-res!

Well, the problem is that in the past few years Pixie chips have pretty well become Unobtanium (a term that goes way back before the movie Avatar, by the way.) In other words, you can't get 'em. There have been a couple of less than optimal replacements (from the perspective of new Elf builders, who have to make them up themselves rather than just buy one pre-made.)

I'm trying to come up with something slightly less suboptimal. And solve a problem that the Pixie chip has.


The Third Cycle...of DOOM

The Pixie's problem is that its timing only deals well with 1802 programs that use instructions that take two instruction cycles or less to complete. There are two instructions, Long Branch and Long Skip, that take 3 cycles. They create jitter in the video by throwing its timing off.

The Elf's video is basically just a straight bitmap of a chunk of memory (the lowest resolution, 64 x 32, maps the Elf's base 256 bytes of memory straight to the screen). So if some other circuit could read a relatively new, fast RAM during the time the 1802 isn't reading it, that circuit could pull the data and run it out to video, and leave the Elf none the wiser.
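That arithmetic checks out: 64 x 32 pixels at one bit each is exactly 256 bytes. A quick sketch of the mapping (assuming the usual leftmost-pixel-in-the-high-bit shift order; the function name is mine):

```python
# The 64 x 32 mode is a straight one-bit-per-pixel map of 256 bytes.
# Assuming the leftmost pixel sits in the high bit of each byte:

def pixel_to_bit(x, y):
    byte_index = y * 8 + x // 8       # 8 bytes cover each 64-pixel row
    bit_mask = 0x80 >> (x % 8)        # leftmost pixel is the high bit
    return byte_index, bit_mask

print(64 * 32 // 8)                   # 256 bytes total
print(pixel_to_bit(63, 31))           # (255, 1): last pixel, last byte, low bit
```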

That way all the Elf has to do is manipulate the data in the section of memory on the screen. And a 3-cycle instruction won't cause any problems. And the 1802 gets about 40% of its time back, during which it would otherwise have been doing DMA of that memory data to the Pixie chip's shift register.

So:
Video that doesn't need the unobtainable Pixie,
No 3-cycle instruction timing issues,
No loss of 40% processing time to video DMA, and
Able to run at higher clock speeds.

Sounds like a winner, right?

Implementation Details

Next was how to actually implement it. I've looked at several ways, with various advantages and disadvantages.

Using faster RAMs was a first building block I looked at. For example, a static RAM pulled out of a 486 motherboard's cache RAM would be more than fast enough. Both 20nS and 15nS are readily available. Plenty fast to grab a byte once every 1802 instruction cycle for the external video system.

Then came my initial thought: maybe use an Atmel AVR microcontroller to do the grabbing, put the data into its internal RAM, and use that as a frame buffer for some bit-bashed monochrome video. No big deal.

No big deal if you've already got the ability to program AVRs or are prepared to supply them preprogrammed. Still, not a bad solution. Just not likely to be as popular as I'd like because of the hurdle of programming the chip. The idea wasn't limited to an AVR, either; any of a number of uC families could work, but they all had the same problem.

Another idea was to build a circuit from random logic. Not as appealing, with my schedule, but if I could be the pioneer on this and put something together, then anyone could order the parts and start wiring. It would probably add anywhere from a third again to double the work of assembling an Elf computer. Again, not perfect, but possible.

Then I thought about taking the AVR idea and moving it to a Propeller board. The advantage here is that, rather than getting a bare microcontroller and having to get the infrastructure to program it for the One Job that that user may ever use it for, the board itself is all the infrastructure needed. (Yes, Arduino occurred to me, too.)

A download on a computer, a USB cable of the correct type (I'm using a cell phone charging cable), and you're in business. Even if the user never does anything with a Propeller again (what an unfortunate thought!) then they wouldn't be out anything but a bit of their time to get the "part" they need programmed for the job.

And it's less time than hand-wiring random logic on a perf board, no matter how you look at it.


It Just Keeps Getting Better

So, I started with the idea that the speedy (80MHz) Propeller would have no problem sneaking in and reading bytes out of the Elf's RAM during those long, lazy ~200nS slack periods. Then it could put the data into an internal frame buffer.

And, hey, the Propeller has built-in video. I could make it so that the final Propeller program puts out composite baseband, composite broadcast, and VGA all at once. What a deal! The user doesn't even have to pick between different programs based on what sort of video they want right now.

Then came the next idea. Replace a chunk of the Elf's RAM entirely. Let the Propeller be a chunk of RAM.

It can't replace all the possible RAM of an Elf in its present version. It's only got 32K of RAM in it, and it needs some of that for any applications it's running.

But, it can replace a chunk of the Elf's RAM. Enough for Pixie-quality video. And with some lines used for control, the video resolution and frame buffer location within that memory can be changed. I don't know yet, but it seems like mapping somewhere from 2K to 4K wouldn't be that difficult.
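The windowing itself is simple. Here's a hypothetical sketch; the 2K size and the idea of a settable base address are illustrative, not a final design:

```python
# Hypothetical decode for a relocatable 2K video window within the Elf's
# 64K address space. 'base' would be set via the control lines mentioned above.
WINDOW_SIZE = 2048  # 2K, enough for Pixie-quality video

def in_window(addr, base):
    """True if this bus address falls inside the Propeller's memory window."""
    return base <= addr < base + WINDOW_SIZE

def window_offset(addr, base):
    """Offset of a bus address into the Propeller's frame buffer."""
    return addr - base
```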


Pixie Compatibility

At first, I'm going to concentrate on a limited memory map, and on replicating the basic Pixie mode of 64 across by 32 high. In spite of the fact that there won't be any actual DMA transfer needed, it should be able to display video from the basic Pixie video programs like the iconic Starship Enterprise video from 1977 without modification. They'll just run faster as a result of no DMA overhead. I think.

Then looking at the exact details of how an expanded Elf uses the other modes (if it does, I've never done so myself) will let me look at expanding the Propeller's memory map area and responding to the Elf's control to do that.

So, if I can do what the Pixie does without requiring DMA, I'm already getting a system that's about 40% faster even if I don't move up the clock speed from ~1.79MHz. (The 1802 was the original overclocker's chip, way back in the 70's, but that's another story.) If I can increase the clock speed even more, with no effect on the video (since a hard-clocked Pixie chip isn't there any more), then I've got a system that'll run such things as Chip-8 and Tiny Basic that much faster than an original Elf with a Pixie.

If the 2.5MHz setup turns out to work (I see no obstacles at present, but that just means I haven't run into them yet), then I'm getting a system that should run about 2.3x faster than the stock Elf. It'll still be no speed demon (that's not the point), but it'll be nicer to use.
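That 2.3x figure checks out arithmetically, using the numbers from the text and treating the 40% DMA loss as exact:

```python
# Rough speedup estimate: clock ratio times the time reclaimed from DMA.
stock_clock = 1.79e6   # approximate stock Elf clock, Hz
new_clock   = 2.5e6    # the proposed faster clock
dma_loss    = 0.40     # fraction of 1802 time lost to video DMA on a stock Elf

speedup = (new_clock / stock_clock) * (1 / (1 - dma_loss))
print(round(speedup, 1))  # prints 2.3
```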

Next Steps, Baby Steps

I'm going to write a program for the Prop to make it pretend to be a RAM for the 1802. It will be a simple 256 byte memory. That avoids the multiplexed signals for the address. It'll (hopefully) receive and store data bytes from the 1802, and deliver data from its store on request.

The first pass is going to be really, really simple. I'm going to set up an array, put in a short program to blink an LED on the 1802's 'Q' output, and set it up to respond to the 1802's memory read requests. No writes required. If that works, then I'll add write capability and put in a program to test that (again using Q as an output.)
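As a behavioral sketch of that plan (in Python rather than Spin, with made-up names standing in for the real bus signals), the two passes look something like this:

```python
# Behavioral model of the RAM-pretender plan: a 256-byte memory that the
# Propeller program would implement by watching the 1802's bus.
# The interface here is illustrative, not actual pin assignments.
class FakeRam:
    def __init__(self, image=b""):
        self.mem = bytearray(256)          # simple 256-byte memory
        self.mem[:len(image)] = image      # e.g. the Q-blink test program

    def bus_cycle(self, addr, data=None, write=False):
        addr &= 0xFF                       # 8 address bits cover 256 bytes,
                                           # sidestepping the multiplexed address
        if write:                          # second pass: honor memory writes
            self.mem[addr] = data & 0xFF
            return None
        return self.mem[addr]              # first pass: respond to reads only
```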

If that works, I'll proceed to give the 1802 some more sophisticated output and input capability and see where it goes from there. But best not to plan in too detailed a fashion too far ahead until the immediate problems are solved.

Thursday, July 19, 2012

8085 Monitor Code and Other Distractions

I've been working on an 8085 microprocessor project for about 3 years now. It started with a simple circuit on a breadboard, moved to my HP Logic Lab, where it became a real computer, then got built into a permanent version on a prototyping PC board.

I've been documenting the thing on my web site since it moved to the Logic Lab, posting how-to articles on building both versions (the hardware is identical, only the construction method is different), as well as software to control the hardware.

Where I've come up short is putting a sort of unified OS for the system online, one that allows software to be developed right on the system itself, along with any high level languages. I've been writing lots of software for my own use with the system, mostly hand-coded using my assembly coding forms and my 8085 Pocket Reference Card. But I keep either getting distracted from or otherwise dodging the job of integrating all the software bits I've already got into a simple "monitor" program (sort of a mini-OS for machine language) for the system.

Part of it is the usual life distractions. I've been sitting next to a wild fire this past week, for example. And then I've got lots of other electronic toys I like to spend some time with. Each one has its own appeal.

Before the fire, and a bit during (when I was taking a break from cutting ever more brush around my property), I got back to work on putting it together. The biggest part is the part that reads the keyboard, determines the current system state, and dispatches keystrokes and execution to the right place. All the hardware interfaces are already in place, and most of the basic system routines (timing, delays, string handling) are in place too, so the "glue" is pretty much all that's needed. I got it about halfway done before the fire started; now I'm writing the code that actually takes action for each mode, or simply hands the necessary info over to user apps running on the system.

So, if I can get time away from deck repairs on my house this weekend (now that it looks unlikely that the house will burn down or that I'll be evacuated), I'll be trying to wrap up and test that code.

Small Thing, Big Obstacle

The other thing I'm looking forward to is replacing some of the switches I put on the MAG-85. I put in switches for various interrupts about two years ago, and the switches themselves turned out to bounce and make so much noise that I've given up on them. No amount of debouncing, hardware or software, within reason has made them reliable. I'd hate to have someone else construct a MAG-85 and have to deal with this. It's been a thorn in my side ever since I added them, and took a lot of the fun out of the permanent hardware project for me. (On the breadboard version, I used some old keyswitches out of a knackered Mac Plus keyboard; they worked great with only the most minimal hardware debounce. But I figured I could hardly specify 25 year old keyswitches in a project that others might want to build.)

I'm expecting a shipment of a bunch of different switches tomorrow that I can test and select from to replace the awful switches I have now. So I can put that behind me (and probably re-simplify the circuit to take out a bunch of the extra parts I put in to try to deal with the noise on these switches.)

Frankly, the old switches are something I'd about hesitate to use in a doorbell circuit, never mind real electronics.

Friday, July 13, 2012

Propeller: Things That Make You Go Hmmmmm...

Propeller quickstart board and a telephone rotary dial...do I feel a project coming on?

P8X32A Propeller Quickstart Board...Telephone Rotary Dial....Hmmmmmmmm.

Monday, January 16, 2012

8085 Resurgent: Back to the MAG-85

I took a look at my own website the other day and realized it's been far too long since I have updated certain items there. Most noticeable to me was my MAG-85 project, an 8085-based micro trainer. There are a bunch of things I thought I'd posted over a year ago, but the information isn't there.

Obviously I never did it. Sorry about that.

The front page image for the 8085 project was a really ugly thing I took at a very interim stage while I was doing regular updates on the project. If I wanted to scare someone off the project, I think that picture is what I'd use. What a rat's nest!

8085 microprocessor computer project SDK-85
A Very Ugly Looking 8085 Computer

Shortly after that picture was taken I built a real front panel and enclosure. It's been happily living in that enclosure for well over a year, but I never posted the info on it. In fact, I've just started taking it apart in preparation for making some improvements. Since I like to take pictures of my work for documentation purposes (like getting the right connectors back in the right places), I took some photos of the partially-disassembled unit as it is before I make the updates. Here's one:

MAG-85 8085 computer project in (and partially out of) its enclosure.
A bit ugly with the top and bottom panels off, connectors and wires trailing out, but not so bad as the first photo.

Here you can see that it's not so ugly as before. The LED to the left of the LCD display is controlled by the 8085's SOD output. The eight LEDs below the display are controlled by the 8085's OUT 01 command and held in their state by a register. The eight switches below that are on the 8085's IN 01 port.

The program that's running currently reads the position of the switches and outputs a byte to the LED bank to match what it sees on the input. It also reads the keyboard (on port IN 00) and sends the key value plus 0x30 to the screen as an ASCII character. The LCD display is on the OUT 00 port.
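In rough pseudocode form (Python here, with a dictionary standing in for the real I/O ports), one pass of that demo program amounts to:

```python
# Behavioral sketch of the demo program described above, with the port
# numbers from the text: IN 01 = switches, OUT 01 = LEDs,
# IN 00 = keyboard, OUT 00 = LCD. 'ports' is a stand-in for real hardware.
def demo_step(ports):
    ports["OUT01"] = ports["IN01"]        # mirror switch positions onto LEDs
    key = ports.get("IN00")               # raw key value, if one is waiting
    if key is not None:
        ports["OUT00"] = chr(key + 0x30)  # e.g. key value 5 -> ASCII '5'
```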

The four push buttons above the keyboard are, from left (red) to right:
RESET
TRAP
RST7.5
RST5.5

In the crude OS/monitor I have running on the MAG-85 now, TRAP acts as an "Escape" key that returns control to the OS. This allows miscreant programs to be stopped and memory examined any time, since TRAP isn't maskable.

RST7.5 is used as the user vector to the start of the application program in memory. In essence, this is the "GO" or "Execute" button.

RST5.5 is also a user vector that should point to a subroutine that does something and returns. Either the application program should initialize it, or it has to be initialized by hand in the monitor.

The keyboard itself uses RST6.5 to read a key value into a buffer, where an OS routine can pick it up/translate it, etc. The keycap legends allow for several uses of the keys. The typical use of the yellow keys is hexadecimal number entry. But they can also be used as arrow keys and fire button (9) for games.
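An interrupt-fed key buffer like that is commonly a small ring buffer: the RST6.5 handler only stores the raw value, and the OS drains and translates at its leisure. A sketch (the buffer size and names are mine, not the actual MAG-85 code):

```python
# Sketch of an interrupt-driven key buffer: the ISR drops raw key values
# into a ring buffer; the OS main loop pulls them out later.
class KeyBuffer:
    SIZE = 16  # illustrative size

    def __init__(self):
        self.buf = [0] * self.SIZE
        self.head = 0   # where the ISR writes
        self.tail = 0   # where the OS reads

    def isr_put(self, key):            # called from the RST6.5 handler
        nxt = (self.head + 1) % self.SIZE
        if nxt != self.tail:           # drop the key if the buffer is full
            self.buf[self.head] = key
            self.head = nxt

    def os_get(self):                  # called from the OS main loop
        if self.head == self.tail:
            return None                # no key waiting
        key = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.SIZE
        return key
```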

The top row has the IN or Enter key in red, the backspace (BK) in blue, the (M) mode and edit/view (e/v) keys in gray. The edit/view key toggles a flag in the OS that switches between a memory protecting mode (view) and editing mode in the various modes. The mode key modifies the mode variable in the system to switch between Memory, Register, and I/O port viewing or editing. It's possible to change from editing to viewing and back again while in any of the modes.

Current Rework
My current plans for reworking the MAG-85 have to do with replacing the buttons used for RESET, TRAP and the RSTs, plus improving the ergonomics of the unit a bit by replacing the top and bottom panels, which were hand-made on hardboard, with some nicer CNC'd panels that reposition some of the controls (I switched off the unit more than once when I meant to switch on or off the backlight for the LCD.)

The four pushbuttons above the keyboard are some really awful buttons. They ring like bells, causing all sorts of debounce problems. I have both hardware and software debouncing on them right now, and it *mostly* works. I think part of why I stopped posting before was that I wanted to kill this problem before I posted, so as to avoid causing anyone else the headaches I've had with these switches. And, as you can see, those switches are still in there.
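For reference, one common software-debounce approach (not necessarily what's in the MAG-85 firmware) is to require N consecutive identical samples before accepting a new switch state; switches as ringy as these push N and the sample period to unreasonable values, which is why new switches win:

```python
# Counter-based software debounce: a new switch state is accepted only
# after 'stable_count' consecutive identical samples. Values here are
# illustrative tuning knobs.
class Debouncer:
    def __init__(self, stable_count=8):
        self.stable_count = stable_count
        self.state = 0        # last accepted (debounced) state
        self.candidate = 0    # state currently being voted on
        self.count = 0

    def sample(self, raw):
        if raw == self.candidate:
            self.count += 1
            if self.count >= self.stable_count:
                self.state = self.candidate   # candidate held long enough
        else:
            self.candidate = raw              # bounce: restart the count
            self.count = 1
        return self.state
```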

A sideline on the current work is also preparing for a couple of improvements that have been planned since the beginning, but haven't happened yet. One is to put a nice little door in so that the NOVRAM/EEPROM/EPROM memory can be swapped out without opening the whole unit. Right now I unscrew the end pieces then pop out the face panel to get at the memory socket. Which is not clean, quick, or easy. I have to get the cables all to go back to their places each time I put it back together. A couple of times I've pulled out one of the input cables by accident, then wondered why the keyboard or switches aren't talking when I get it back together.

The other is preparing to mount a battery pack inside, so that I can go cordless with this thing. A lot of what I've been doing in design tweaks is reducing the power used by the system. Finding a good brightness for the backlight, putting a switch on the backlight so that I can turn it off when I'm in good light, reducing the brightness of the power LED and the I/O LEDs, that sort of thing. As well as looking at my nascent OS to see if there are things I can do there that will reduce power while staying out of the way of the user's programs.

I'm also planning on adding another memory socket for an EPROM. It won't have a ZIF socket in it like the one that's there now, but I'm getting to the point of wanting a more permanent memory for the OS, with the NOVRAM left for user space programs. This may involve some rejiggering of the memory map, since for mechanical reasons I'd like to have the removable memory in the center of the PCB side to side.

I'm also looking at adding an expansion port or two on the new end plates, which raises the question of whether to use a standard connector with a more or less standard wiring for the port (like a PC's bidirectional parallel port) or whether to roll my own for the sake of less constraint on how the lines are used. Basically, just bringing the lines from the I/O buffer registers straight out of the box along with power and ground. This is my preference for a number of reasons, but there's an appeal to letting the MAG-85 drive standard I/O devices, too.

The addition of a serial port, either at TTL levels or as a standard RS-232C port, is another possibility I keep playing with. I've kept SID available for this possibility, though I was tempted to use it as either a user digital I/O or a memory bankswitch I/O or something of the sort. Being able to connect the MAG-85 straight to a terminal would be really nice, so I'm leaning toward an RS-232C port.

But if I do that, I'll need a separate set of I/O routines for that port that allow it to be the primary user I/O for the OS. So if I do that, it'll probably be deferred until some other things happen first.

Like finishing the OS well enough to post it.

Finishing the MAG-85 Operating System
I haven't posted the OS yet because it's still in a fairly yucky developmental state. That and my test hardware still has the crummy switches that do nasty things to me at times, and there's lots of code in the OS just to try to deal with them that can probably come out once I've got decent switches in.

So my present software objective is to clean up the OS and put a bow on it so that I can "ship" it to a download on my site. I may end up cutting some of the features, but there's also the possibility that I'll end up cleaning them up because I've already got too much other code running around that uses them. The core basics are the ability to view and edit the system's memory, and execute a program starting at some given address. I'll guarantee that much.

I think the ability to view and edit register values is pretty well sewn up, too. It's possible to bork the OS by doing something stupid here, but I'm not trying to protect the user from being stupid. Press TRAP (probably to be labelled ESC) and the OS will restart and reset any values it needs to function (I hope!)

The ability to view inputs and edit outputs interactively shouldn't be a problem, either, but it's not an immediate priority. If it happens effortlessly, it'll be in the initial release. Otherwise, I won't hold up the release for it.

I've also got another mode that is enabled in some versions for viewing/setting user variables in the OS, such as the RST5.5 and RST7.5 vectors, selecting different display formats, setting the size of the LCD, and so on. I can pretty well guarantee that the full-up version of this will not be in the initial release, though a cut-rate version of it may be. That would mean that you would set these variables yourself when putting the code on your own MAG-85.

If I do go to a design that uses two memories, one rewritable in system and the other not, I'll have to put these variables in RAM, or expect that they be set once and for all in the firmware. I'd like to be able to provide an EPROM at some point for people who want to build a MAG-85 and get up and running without having to put the OS in themselves. But if I do, I either need to put some system information in the NOVRAM memory space (like the height/width of the LCD display) or require that only one size of LCD be used with a specific version of the OS. Since I'm looking forward to building another MAG-85 with a larger display (perhaps 32 characters wide by 4 lines high), I'd like to keep the OS flexible. The initial display routines will only use a portion of displays larger than 20x2, but the user programs can use the larger displays and the OS can be updated later to have multiple display formats that the user can select.

But right now, I just need to drive a stake through its heart and get a workable version out the door. :)

Tuesday, December 6, 2011

A Neat Modern Retro Computer: The New Elf II

My first computer was a version of the COSMAC Elf computer. It was a simple little computer, costing about $100 in 1976-77. The joy of it was that it was a real computer, and yet it was direct enough in its design philosophy that every single thing about it was understandable and controllable.

I wasn't the only one enchanted by this small computer system, or to have it as a first computer.

Marc Bertrand has built a new, modern version of the Elf computer, visible at his web page. It's not only got the original video interface of the Elf II, but also a nice LCD display driven by a PIC microcontroller. It has heaps of I/O capability through several interfaces, nicely controlled by some Altera programmable logic.

Have a look, it's a sweet little system with a layout very reminiscent of the original Elf II computer.

This isn't it, but it's a little test circuit for the 1802 microprocessor used in Elf systems that makes a nice LED blinky:
RCA 1802 test circuit that happens to make a very nice little LED blinky
A simple test circuit for the RCA 1802 processor used in the Elf computers

Wednesday, November 9, 2011

Low Level Computer Teaching Options

We have a current discussion on the COSMAC Elf Discussion Group that centers on the idea of a small computer to teach low level computer concepts. Many of us in the group got our start with the COSMAC Elf as our first home computer. It is a small, simple, inexpensive computer. One of its finest points is that it is simple enough that a person of ordinary intelligence can understand how every part of it works, down to the lowest detail.

The place for a small teaching computer, as we're discussing it, lies somewhere between electronics and the standard non-computer-science introductory programming class. It's a matter of teaching what the components in the system do, and how they do it. This becomes a model of what happens inside more powerful modern computers at larger scale, such as current desktops, laptops, tablets, and smartphones.

COSMAC Elf single-board computer with PIXIE video. A complete computer system in 1977 with only 13 integrated circuits!
The COSMAC Elf, this version includes video graphics.


Is anyone using something along the lines of a microprocessor trainer in the classroom today outside a college level EE class?

Personally I can see two general approaches to this, with several possible variations on the two themes. Let's look at them, then I'll go into Blue Sky mode to talk about what I sort of wish for.

Some Ways to Bring Computer Hardware into Class
One is to fake it entirely with present-day hardware. After all, if it's possible to do a complete chip-level simulation of an 8-bit processor in Javascript, it shouldn't be much of a stretch to simulate an entire simple 8-bit microcomputer in a program with the ability to "see" all the operations inside simulated on the screen.
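To give a taste of what such a simulation involves, here's a toy fetch-execute loop for an invented three-instruction machine (not any real CPU), tracing every internal step as it runs:

```python
# Toy illustration of the "simulate everything on screen" idea: a made-up
# 8-bit machine with three opcodes, tracing each step as it executes.
def run(program, trace=print):
    acc, pc, mem = 0, 0, bytearray(program)
    while pc < len(mem):
        op = mem[pc]
        trace(f"PC={pc:02X} OP={op:02X} ACC={acc:02X}")
        if op == 0x01:                        # LDI n: load immediate
            acc = mem[pc + 1]; pc += 2
        elif op == 0x02:                      # ADI n: add immediate
            acc = (acc + mem[pc + 1]) & 0xFF; pc += 2
        elif op == 0x00:                      # HLT: stop
            break
        else:
            raise ValueError(f"bad opcode {op:02X}")
    return acc

result = run([0x01, 0x05, 0x02, 0x03, 0x00])  # LDI 5; ADI 3; HLT
# result is 8 (5 + 3)
```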

The problem is that this still really fails to make what's being taught "real". To the students, it becomes just another show to watch--one with no particular interest to most of them.

The other approach is to use an actual old microcomputer in class, like the Elf, with the students handling the system, measuring voltages or using logic probes to "see" the signals in the computer. Something more sophisticated would be using chip clips with LEDs on the various lines as a sort of multi-line logic probe. (Here is a place where an Elf or other RCA 1802-based system would shine. The 1802 is a fully static processor. It can run at clock speeds from 0Hz on up to its maximum, with clock changes on the fly. I have literally clocked 1802 systems by hand by connecting and disconnecting the clock line to +5V and Ground, counting out machine cycles as displays show the status of various system lines. There are not a lot of computer systems that can do that!)

An annotated image of a COSMAC Elf computer, showing the location of CPU, memory, and other ICs.


Between these two lie many other options. One would be to have a hardware board that connects to a modern computer through a common interface, like USB, where some I/O devices could be visibly controlled by the computer (via lights, motors, etc.) and with lines exposed that can safely be probed by the students.

Another would be using a more modern hardware platform, perhaps based on one or more microcontrollers that emulate the function of an older system, exposing such things as memory access, control signals, and so on to the students. The board could include displays and LEDs to show the status of the lines, internal pseudo-registers, and so on. The operation of the entire system, both inside and outside the simulated ICs, could be made available to the student's eyes.

Part of what needs definition is the acceptable limitations of the system. In my own case, I see such a system as being an introduction to low-level hardware operation and control of that operation through software.

Blue Sky Dreaming
If I could have what I wanted without any effort on my part or a significant amount of the school's money, here's what I'd like:

Step 1
I would want to introduce a basic system that's very similar to the original Elf of 1976.

It would have:
  • Toggle switch inputs (to associate signals with data and to help teach binary),

  • A binary LED display and a two-digit hexadecimal display,

  • Very limited memory (about 128 to 256 bytes), to teach how much can be done in limited memory and to limit the size of early programs to sane sizes,

  • Exposed memory and I/O lines, possibly with LED monitors

  • Extra monitors, like maybe dual color LEDs to show data direction on I/O ports, etc.

  • A simple machine language with whole-word mnemonics.

  • The ability to operate at extremely low clock speeds (0-100Hz) as well as higher speeds (1-10MHz or something like that.)


Step 2
  • Hexadecimal Keyboard

  • 512B to 1024B of RAM

After the first few lessons, the toggle switches would get old and I'd want to introduce a hexadecimal keypad. This would teach hexadecimal, and continue the association of computer instructions with numeric values in the computer. Presumably the connection between signal levels and numbers has been made using toggle switches.

With the easier input technique, it'd be nice to add some more memory, up to something like 512 bytes to 1 kilobyte.

Step 3
  • Keyboard with instruction mnemonics and hex digits

  • Perhaps more memory, up to about 4K

Next, a keyboard would be attached. Perhaps writing software to interface the keyboard to the system would be one of the Step 2 projects. While I'd be tempted to use an ASCII keyboard, I think a raw matrix keyboard would teach more. On this keyboard, machine language instructions and hexadecimal numbers would be mapped to each key. This would again speed programming, and reduce errors. The simple machine language I envision has a particular addressing mode associated with each mnemonic, so there's still no assembling of code required.

Step 4: A larger step
Next, I'd move to a more abstract level. I believe that the activities prior to this point would teach low level operations well enough to take this jump and still be able to show the connection between the two.

For step 4, the computer would get:

  • More memory. Anywhere from 4K to 64K. Perhaps it would start at 4K and grow as the students hit the limitations of each memory size.

  • A terminal connection to a current generation computer for keyboard and display, or an encoded keyboard and some other form of text display.

  • New firmware (probably activated from on-board with a mode switch), which would provide a fairly sophisticated command line interface with command editing, recall, etc., as well as an interactive programming language. The specific language doesn't matter too much, it could be a BASIC, a bash-alike, a LOGO, or an interactive form of some other compiled language.

  • Mass storage. Probably some modern semiconductor memory.


The point at this step would be writing high level programs to perform low level actions like those seen in the earlier steps. Seeing line levels and I/O operations performed, using bitwise operators, seeing the signals represented as numbers of various bases within the language (which I'd expect to support at least binary, hex, and decimal for representation and constants.)

Step 5
The final step with the low level computer would be to produce more sophisticated programs. These would be longer programs, probably projects done by groups of students over a few weeks in class. At this point the understanding of the program control structures and data structures should be a bridge to programming in the chosen language directly on the modern computer.

Final Thoughts
These thoughts are somewhat half-baked as they stand. I or someone would have to do some more work to really define this and turn it into hardware and software and a curriculum to go with it. Some points that need considering are the demarcation between this and a robotics class, common in many schools now (including the one at which I teach.) Also, how much class time does this merit? And so on.

Personally I think that using a micro trainer level system is simple enough to be mastered by most middle-school level students. I've got some actual experience with students to back that up, in addition to my own experience (I was 14 when I constructed my own Elf.) For the students, the information not only gives them an understanding of the underlying technologies of current systems, but would open the doors to embedded systems, far more common than conventional general purpose computers. Either way, it would make the computer far less a piece of technical magic controlled by somebody else and far more something comprehensible, and therefore controllable, by themselves.

Some related work--a 4 bit TTL Processor.

The fact is, all the steps above would probably be unnecessary and involve too many changes to the hardware platform to be practical in class. A more reasonable approach would probably be to go from a slightly more capable Step 1 computer directly to Step 4. This would reduce the opportunity for student disorientation as a result of seemingly constant hardware changes, and still be enough to get the key points across.

The activities I envision for Steps 2 and 3 could be either dropped or performed in either the initial or final configuration of the system. This would also simplify the system itself.

Monday, September 5, 2011

Motorola 68000 "Art"

I got an email message today about an image I posted some long time ago of a poster for the Motorola 68000 processor. The original image was a black and white test of image detail taken with my then-new cell phone.

Well, looking back at that old image is downright embarrassing. So here's a better one, and some other M68000-related images to go with it.


Motorola 68000 promotional poster with chip die image.
My poster from the Wescon conference after the 68000 was introduced. It came with a 68000 die tacky-glued to the poster, which I've since misplaced.


M68K Family Microprocessor User's Manual
The M68K Family User's Manual. Hardware Design Stuff.


MC68000 User's Manual
The older user's manual just for the 68000, this one's programming info.


HP 9800 User's Manual for M68000 CPU
Another 68000 User's Manual, from HP, pretty much the same as the above, but smaller and with an HP label on.


M68000 Cross Pascal Compiler Systems Guide
The Pascal Cross Compiler's Manual for the 68000


68000 Data Sheet Front Page, 1979
This is the front of a pre-release M68000 data sheet I got at a promo event in 1979. It got me pretty excited, I can tell you!


The programmer's manual for the whole 68K family.
Cover of the software book that goes with the 68K family hardware book, above.

Wednesday, February 16, 2011

R.I.P. HFE Electronics, Sac's Electronic Part Source

I wrote a post about the HFE Electronics Store last year. They took over the former HSC Electronics store here in the Sacramento area. I drove by today, and the store was empty.

HFE Electronics. A Valiant Effort, but Now It's Gone.

It looks like they went out of business two or three months ago, based on their Facebook discussion page. I only got down there four or five times a year, but I usually stocked up on parts, tools, and supplies on each trip.

Unfortunately this leaves me without a local storefront supplier for electronics. Our local Fry's Electronics carries enough in the way of components and such to make up for what Radio Shack dropped from their line a few years ago. But the selection of parts like interconnects, switches, and prototyping components they had at HFE was far greater than I'd expect to find anywhere other than an old-fashioned electronics store.

I do buy a lot of parts online now, I'll admit. I have about two shopping sprees a year for parts from places like The Electronic Goldmine and BG Micro. Not to mention frequent buys of parts from regular suppliers like Jameco, Mouser Electronics, and All American Semiconductor.

But there's a difference.

At a local storefront, the parts are in my hands today. Plus, I can see them and handle them. With items like switches this is extremely important. I can also buy parts in small quantities speculatively, then come back the next day if they work out and I need more. (I did this several times over the years at HFE/HSC. I remember once buying over $100 more in components than I'd planned on each of two consecutive days. As I recall, the main culprits were some displays on the first day and some uP support chips on the second.)

Another thing that's important about local stores is their accessibility to youngsters getting into the electronics hobby. I know that for me when I was young, and now for my daughters, fishing through boxes of parts and imagining what could be done with them is very inspirational. A couple of the fellows at the shop I haunted in my teens (Wenger's Electronics in Walnut Creek, CA) mentored me on several of my early projects. Plus there was advice I picked up from other customers. Not to mention that Wenger's had a rack of bubble-pack parts from Jameco, where I got my first 8080A.

A Long Time Since

I've been shopping at what was HSC and became HFE since I moved to the Sacramento area almost exactly 25 years ago. Back then, I lived just a few blocks away. I not only stopped by in the evenings (if I had to run out in the evening for something, hey, HSC was next door to a Circle K store. How convenient is that? Pick up milk and potentiometers on the same trip out!), but also went to their great flea markets on the weekends. You could pick up most of an Altair or IMSAI for a song back then, enough that a little TLC would get you there. (I wish I'd picked up an IMSAI for the front panel.)

There were other electronics stores* in Sac back then, and since I was right at the north end of town, it was easier to get to the stores in places like Natomas and Del Paso Heights. Since I moved to the foothills, HFE has been where I go.

This is where I met Dave Baldwin. I followed him home to get a treasure trove of old computers and parts he was seeking a home for. It's where I picked up about half of my test instruments, in various states of operability, that I use regularly today. It's where I've picked up interesting bits for many projects, and bits that still inhabit my parts drawers awaiting future fun projects.

I'll miss it.
----------------------------------------------------------------------------

Note: If you know something else in the area that I don't seem to know about (I already know about Fry's and Radio Shack, obviously), then drop a comment or an email, please!

*I'm using the term "electronics store" to refer to what I consider to be an electronics store, which is not a place that primarily sells consumer electronics. We used to call those "TV shops" or "Stereo Stores". I'm rather perturbed by such places moving in on the term "electronics store" (though many old-line electronics stores were once TV stores or repair shops.)

Monday, February 7, 2011

Coding Forms for Hand-Made Programs



I've posted some of the coding forms I use on my website.

For me, hand coding and hand assembling software is pleasant and relaxing. It's like crochet or embroidery (I do both of those as well as programming.) Something as simple as a printed form makes it nicer, neater, and faster.

The forms of my own design that I've posted are generic assembly language forms. I've also posted a TI 960/980 assembly language form, as well as some programming forms for the HP-41C, HP-67, and HP-97 that I use with all of my programmable calculators (like my HP-35S.)

Friday, February 4, 2011

MAG-85: 8085 Microprocessor Project Software Progress

With the hardware work on my 8085 microprocessor trainer project about 99% complete, most of my effort over the past month or so has been developing a simple monitor program (a sort of mini-OS) for it.

The biggest part of that OS deals with the user interface (go figure), specifically the routines for controlling the LCD. The most demanding of the routines have been done for well over a year now, ever since I finished testing the LCD display with the breadboard version of the MAG-85.


The Early MAG-85 with LCD Demo Program

When playing with the LCD on my own (i.e. goofing off rather than writing code for the monitor), I've written lots of ad-hoc routines to drive the LCD whenever I wanted something more sophisticated than "write this character to the screen" or "send this string to the screen". Character-out and string-out routines were in my early test software. What I needed were things like automatic wrapping at the end of the displayed line and some terminal control commands.

Coding Rule: Make One to Throw Away

The display controller for the LCD has a larger "display" area than the LCD actually displays. Its buffer holds two lines of 40 characters. That's really nice on my other system, which has a 40x2 LCD. But the MAG-85 uses a 16x2 display. This means you can send characters to it, and it'll happily accept them and put them into the buffer in places the LCD doesn't show. So you don't see over half of what you write. Or you need to keep track of how many characters you've written since the start of the line and wrap to the start of line 2 yourself. And so on.

In my first shot at a set of LCD routines for the system monitor, I wrote a bunch of stand alone cursor position tracking and movement routines. Then I had to invoke them from all over to keep track of stuff. And since the software had to do its own tracking, stuff would get missed, the LCD would get out of sync, and it became standard practice to re-home the cursor and display every so often to keep things from going off track for too long.

Plus the number of routines was getting out of hand, along with the size of the software. I'd intended the software to come in somewhere around a page in length, and I was at about twice that with only about half of it implemented! For comparison's sake, a system that uses a hardware approach to display control (e.g. LED or other simple segmented displays with data buffers and select logic), rather than software like the LCD requires, fits its whole monitor into about 1K of memory. I was looking at my LCD routines taking up as much room as the whole rest of the monitor.

Frankly, I felt that I was going the wrong way, and it slowed me down in getting the monitor done.

Coding Rule: Make Another to Keep

I decided to strip the code back to my original tight routines. All they do is initialize the LCD, send out a character or a command byte, or a string of characters. First, I dressed up my initialization routine by moving out some recurring code for delays into subroutines. This not only reduced the size of that routine, but made those subroutines available elsewhere.

Then I looked at my character out routine, and decided to put the intelligence about where the cursor was in there. After all, if it gets written this is where it happens. I built in an automatic wrap. It seemed to fit well, and it took a lot of heat off the user code. Suddenly things were looking a lot better.
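The shape of that change can be sketched in C. This is a hypothetical model, not the actual 8085 assembly from the monitor; the names and the 0x40 line-2 address are assumptions (that address is typical of common character-LCD controllers). The point is that all cursor bookkeeping lives inside the character-out routine, so calling code never tracks position itself:

```c
#include <stdint.h>

/* Hypothetical C model of the wrap logic: the LCD controller buffers
   2 lines of 40 characters, but only 16 columns are visible. All
   cursor bookkeeping lives in char_out(), so callers never track
   position themselves. */

#define VISIBLE_COLS 16
#define LINES        2

uint8_t cur_line = 0;   /* 0 or 1 */
uint8_t cur_col  = 0;   /* 0..VISIBLE_COLS-1 */

/* Stand-ins for sending a command or data byte to the controller. */
static void lcd_cmd(uint8_t cmd)  { (void)cmd; }
static void lcd_data(char c)      { (void)c; }

/* Move the controller's cursor: line 1 starts at address 0x00,
   line 2 at 0x40 on typical character-LCD controllers. */
void lcd_goto(uint8_t line, uint8_t col)
{
    lcd_cmd((uint8_t)(0x80 | (line ? 0x40 : 0x00) | col));
    cur_line = line;
    cur_col  = col;
}

/* Character out with automatic wrap at the visible edge. */
void char_out(char c)
{
    lcd_data(c);
    if (++cur_col >= VISIBLE_COLS)
        lcd_goto((uint8_t)((cur_line + 1) % LINES), 0);
}

/* String out becomes a plain loop; it inherits the wrap for free. */
void str_out(const char *s)
{
    while (*s)
        char_out(*s++);
}
```

With the wrap buried in char_out, the "keep re-homing the cursor to stay in sync" problem goes away, since there's only one place the position can change.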

Once I got to rewriting some of the other added functionality, it turned out smaller and simpler. It went so well that I'm much farther along in implementing everything I want than I got with the old code. And at present the code is only just over a page in length, without my having gone back to look for optimizations (repeated code and so on.) That'll come next, after which I'll be desk-checking my stack handling as I jump around between different routines.

But I'm much happier, and I've got a lot more to show for my time this time. It's nothing revolutionary, but there are times when it's a breakthrough just to step back and see the obvious.

Saturday, December 4, 2010

Retrocomputing for the Holidays

What could be better than an old computer under the Christmas tree? It's a fun, inexpensive gift. There's always something old under our Christmas tree. Among the larger items have been the Commodore 64 that I gave my daughters as their first computer ten years ago. It included a monitor and floppy drive, of course, and copies of the user's manual and programmer's guide.

They'd both already started programming in BASIC on my Apple IIe during the prior year. So they were ready for the C-64! I got myself a Softcard for my Apple IIe that same year.

Two years later, I added a Plus/4, since they were frustrated with the lack of direct graphics commands on the C-64. Peek and poke wasn't good enough. ;) That same year some friends gave me their Laser 128 (Apple IIc clone.)

Christmas Cheer with PA-RISC

Another two years passed, and they each got an HP 9000/700-series Unix workstation. They spent Christmas break learning the Bourne shell and playing with Neko under X-Windows. Yeah, these weren't 8-bitters, but they were fun old systems that ran plenty of network apps. I had a MUD on my Unix workstation, and they learned how to telnet in and play pretty quickly. Their typing improved dramatically.

I picked up an Amiga 500 and that became the family's Christmas present the next year. We hooked it up to our 36" TV and played music, games, and a bunch of demos.

The year after that, a friend gave me a Bigboard I system. CP/M, 64K, and two big 8" floppy drives. Wordstar and BASIC-80 heaven. Everyone gathered around to roast chestnuts over the power supply and listen to the disk drives churn. ;)


Sure, it's got iTunes. Does it do X-Windows?
That same year, my daughters got upgraded to Macs, G3 B&Ws. They were excited about getting a computer that would run iTunes, but first they made sure that all their Unix stuff would run as well. Once they were sure they weren't giving up the Unix command line and could port their applications, THEN they were OK with the upgrade.

Three years ago, I did some repairs to a Kaypro 4 to get it working again. Unfortunately I haven't figured out how to read and write standard diskettes with it yet. It's a souped-up unit, with an aftermarket ROM, hard disk, and floppy drives that include both double and high density units. It got set on the back burner in favor of some other projects; once they're finished, I'll get back to determining whether I've got all the right software to go with the ROM that's in the system.

Visiting Old Friends

Two years ago, I actually avoided adding any new hardware for the year. Instead, I spent time during the holidays pulling out several of my systems that had been a bit neglected, giving them some TLC, and then playing with them: the Apple IIe, my own C-64, another Plus/4, and the Big Board.

Last year was the year of hardware projects. I had a new 8085 computer on a breadboard (see the story at http://saundby.com/), and was preparing to move it to soldered circuit cards. I was also migrating my Ampro Little Board (a Z-80 system) from loose components on the table top to living in a box like a proper computer.

Bringing Up the Ampro Little Board

Catching Up for Christmas

This year I'm working to finish the non-breadboarded version of my 8085 and building up a Membership Card (1802-based computer, similar to a COSMAC Elf from 1976 but a lot smaller) in a decorative little Victorian-style case. I'll be posting that on my web page at saundby.com soon, I expect.

I'm keeping my eyes peeled for a Commodore 128 at the thrift stores, too. That'd really round out this Christmas year.

What's on Your List?
What retrocomputer experiences have you had for Christmases past, and what are you hoping for under the tree this year that's hopelessly "out of date"? :)

Saturday, September 18, 2010

8085 Disaster! (well, a minor setback, but I'm not happy about it.)

I'm in the final stages of assembling the permanent hand-wired version of my 8085 project, and I've run into a problem.

3 Switches Become 1 Big Problem

There are three critical switch inputs to the 8085 on the front panel. They are the TRAP, RST5.5 and RST7.5 signals to the 8085. Essentially, each tells the 8085, "Stop what you're doing and do this instead."

It Worked Fine on the Test Bench!

When I built this project on solderless breadboard I didn't have any serious space limitations, so I did full "debouncing" on each switch, to prevent electrical noise caused by the switch from sending a whole bunch of "stop what you're doing!" messages all in a row when the user just presses the switch once.

On the soldered-up permanent version, I decided to save a little bit of space on the circuit board by trying to debounce on the cheap. A normal debounce circuit uses two inverters to clean up the electrical noise. I had a chip on the board that had six inverters in it, two of which were in use. That left me four inverters, for three signals. I needed six to do the job right.

Too Clever by Half

But, I had a "clever" idea. I thought I'd see if I could get by with a circuit that only uses one gate per signal. Not a full debounce, but a circuit called a "Schmitt Trigger." I built it up and tested it on the solderless breadboard.

It worked just great!

Then I built it up on the soldered board. It worked great there, too.

Then I went and bought some prettier switches to use on my new system's box. It worked pretty well. It seemed OK...

Then I did a test fit on the box with all the parts. I pressed my pretty new switches.

Disaster!

When I was pressing the new switches on a handheld box, rather than against a tabletop, a whole bunch of signals got sent to the 8085 each time I pressed a switch. The results were pretty ugly.

A Quick Fix is No Fix

I tried out some quick-fixes, as well as making sure all my connections were good. I'd hate to redesign and modify the circuit only to find out the problem was a loose connector all along. The connectors were tight, and the quick fixes helped a bit, but didn't fix the problem. The original (ugly) switches I used worked fine, but I tried a selection of switches in the circuit and most of them had the same problem when hand-held even when they didn't have any problems on the test bench. (I really did test the snot out of this when I first went with the shortcut. But I didn't get all the conditions right for a good enough test. Now I'm paying the price.)

I considered changing out the switches. But this would be a bit of a cop-out. I want to make this something someone else can build and enjoy without having to go through the tweaking and testing and all that I'm doing as I build it. I want others to be able to build it and just have it work, so long as all the bits are in the right places. I can't guarantee that everyone who decides to build it is going to get "clean" switches. So I have to go back and change the design to use real debounce. Then test that, under adverse conditions, like with a rusty knife switch.

One of the quick fixes I tried would probably have been good enough to work under most conditions, combined with a software change to delay any action on the switch input for a fraction of a second. I could have gone with that, and felt pretty good about it. Except that there's a fairly likely condition it wouldn't cover.

The Deadly Condition

If the user presses the switch and holds it down for a bit before releasing it, there'll be a second switch event on the release. If someone has a habit of sort of "leaning on" the switches, this will happen to them fairly frequently, and probably ruin their experience using the computer.
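For comparison, here's what a software debounce that covers both edges would look like. This is a hypothetical C sketch, not what's in the MAG-85 monitor (names and tick count are my own assumptions): the reported state only changes after the raw input has held steady for several consecutive samples, so bounce on release is filtered just like bounce on press:

```c
#include <stdint.h>
#include <stdbool.h>

/* Counter-based debounce: the reported state changes only after the
   raw input has been stable for STABLE_TICKS consecutive samples.
   This filters bounce on both press AND release -- the case a simple
   "ignore the input for a moment after a press" fix misses. */

#define STABLE_TICKS 5

typedef struct {
    bool    stable;   /* last debounced state reported to the caller */
    bool    last_raw; /* raw sample seen on the previous tick */
    uint8_t count;    /* how long the raw input has held steady */
} debounce_t;

/* Call once per timer tick with the raw switch sample.
   Returns true exactly once per clean press (rising edge only). */
bool debounce_pressed(debounce_t *d, bool raw)
{
    if (raw != d->last_raw) {
        d->last_raw = raw;
        d->count = 0;            /* input moved: restart the clock */
        return false;
    }
    if (d->count < STABLE_TICKS) {
        d->count++;              /* still waiting for stability */
        return false;
    }
    if (raw != d->stable) {
        d->stable = raw;         /* stable long enough: accept it */
        return raw;              /* report only the press edge */
    }
    return false;
}
```

A release (even a bouncy one) just moves `stable` back to false without ever returning true, so "leaning on" a switch produces exactly one event.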

I did say these switches are important--in the OS I use them for a warm reset, and vectoring to the user's program. The third one is left for the user in their own programs. But if those functions aren't reliable, it sorta hoses the whole affair. So it's time for me to do the right thing.

Test, Test, Retest, Pray and You Shall Receive

Tomorrow I expect to be protoboarding the correct circuit that I probably should have done in the first place. I'll test it, and if I'm not 100% convinced, I'll use a different sort of circuit, called a "one-shot" or monostable, and make darn well sure with belt and suspenders and a rope tied to a thick tree limb. I may try to avoid adding another IC to the board by using transistors for the last two inverters, or I may try to keep down the varieties of components I use by just dropping in another 4049 in addition to the one there now. On my other part choices I've opted to keep the number of different parts low by reusing the same chips over and over wherever possible.

I'm expecting to put in another 4049, even though it ties up more board space than I like. I might get creative about where I put it, but then again I may not.

Finally, Fitting It In

It's not like I don't have open board space. But I had plans for that space in the future. A memory bank select, another memory IC, and an RS-232 chip. I think the space saved for the RS-232 chip is going to get used. Besides, I may be able to fit in a little Dallas/Maxim chip in one of the odd corners later, or just "float" it on a daughterboard connected to the RS-232 connector.

After all, the core computer has to be working before I start worrying about expansions.