Classes start next week. Between meetings today I had time to slip into the computer lab and spin up the Ubuntu Live CD for Hardy Heron. It works great with the PCs in the school's lab. Networking works fine, video is good, apps run, and so on.
I'm really pleased about this, even if it's not an Edubuntu Live CD as I've mentioned elsewhere. I'll be able to use this disk in class even if it doesn't include some of the Edubuntu apps I'd like to have. I'm not in a position to install Ubuntu so that I can apply the Edubuntu stuff to it. I need everything on one Live CD.
In addition, I may download the older v.7 Edubuntu Live CD image to get the Edubuntu apps. After all, that's what I was told I was getting when I ordered the CD; what I actually got was the Hardy Heron upgrade, where Edubuntu no longer fits on the same CD as the core Ubuntu. The fact that Hardy Heron works fine on the school's hardware means it's reasonable to hope the prior version will do the same (we're not talking cutting-edge hardware here).
I was hoping to spin up some other free OS Live CDs, but my time instead went into making a copy of Ubuntu for a former student who happened to be in the school center. She has a computer at home that's a close cousin to our lab PCs, with some problems that a switch to Linux might fix.
The joy of free software: all I had to do was pull out a blank CD and make a copy for her. No fuss, no muss, no shrink-wrap licenses or fees.
And knowing that I've got at least one Live CD that just up and runs on the school's computers feels really good.
Wednesday, August 27, 2008
Saturday, August 23, 2008
Four Things I Would Like My Students to Get
It's the start of a new school year. Every year it's very exciting to look forward to the new classes and plan where we'll be going between now and next summer. Not only am I planning out the classes I teach, but I'm working with my daughters on putting the final touches on their plans, and my oldest is preparing to apply to universities for next fall.
This year I'm teaching computer classes at the high school and middle school levels, as well as a cartooning class. The computer classes are elective classes with no specific requirements, which is nice since it means I can do a lot of things with these classes without being bound by the restrictions of an established curriculum. This also means more planning and work for me than if we were just going front to back through a text, but I'd much rather have it this way.
The cartooning class seems like a frivolous class, but I use the fun aspect of it as a hook for the students to make it into more of a class in communication. Alongside lessons on drawing characters I teach the plot skeleton, storytelling techniques, and so on.
No matter what the subject, there are four things I would like my students to get from my class:
- Confidence
- Ability
- Results
- Patience
I want them to develop the confidence to use the computer as a personal tool for self-expression and communication. I want that confidence to lead them to try new things, and feel that they can teach themselves new things. I want them to perceive the computer as a medium for their use, not as an antagonist.
I want them to have some basic ability to use the computer, some actual knowledge upon which to base their confidence. The specific abilities will vary by the student's age, and by the individual. But they should actually be able to use the facilities of the computer to produce something they've thought of, and to extend their abilities beyond what we're able to cover in class.
They should have some tangible results from their effort. Some finished pieces of work, in whatever form, to demonstrate to themselves and others the abilities they've developed. The products of their work should be impressive within a limited scope of effort, rather than incomplete, overambitious attempts. They should be the result of a complete "life cycle" of producing a work, from concept to completion, including the incorporation of some incremental improvement.
Possibly one of the toughest things to give a student is the patience to stick with something. My experience is that having some prior results is one of the best ways to get there. Once they've got something to show that they have an ability, they can come back and apply that ability again on something more ambitious. With the knowledge they've developed of the process of producing something, it's easier for them to commit themselves more fully to their work.
Without something tangible to give them a sense that their time is well spent, they won't have the patience to deal with niggling syntax rules, strange vocabulary, and unfamiliar concepts. So there needs to be an early payoff. This can then be followed by further demands on their time and attention.
Thus, patience is developed. At some point the cross-over is made to the student seeking more knowledge and involvement on their own. When they're working for themselves on a project of personal interest, patience becomes practically limitless. They enter the joyful iterative cycle of improvement, both for themselves and their programs, so well known to so many programmers.
Labels:
philosophy,
Teaching Computers,
Teaching Programming
Tuesday, August 19, 2008
Busy, Busy, Busy
Preparing for the start of school has had me busy lately, as has a personal project.
I've just about finished a rebuild of my class site; the new version should be posted within the next two days (as I write this, the new version exists only on my local disk). I'm also preparing the early handouts and organizing my class notes.
Also, my daughters and I are making some telescope mirrors. We've just transitioned from polishing to figuring. So a lot of my spare time has been spent hanging over a Foucault tester looking at shadows.
So the blog is neither dead nor forgotten, just off to one side for a bit. Stay tuned!
Monday, August 11, 2008
Increase Your Depth as a Programmer by Spelunking
Once upon a time I had a sort of game I used to play. It was fostered in part by "downtime" at work, where I had nothing to do immediately, but I couldn't look idle, either. I called it "spelunking." I'd dig into the computer systems that I had access to, trying to find everything on the system that I didn't already know or know about and learn what it was and how to use it.
Unix systems were particularly fun in this regard. I found games I didn't know, and learned how to use utilities that were already on the system that my coworkers were often not even aware were there. I saved my employers a fair amount of money several times by letting them know they already owned the solution to some problem.
Up until the mid 1990s I knew what every program was on several commercial Unix base installations. Then it got beyond me, but I still kept at the game. Now a lot of the game is seeing how many of the old, still useful, tools are still there. The fact that a basic C compiler is typically not part of a normal install of Unix any more bugs me. In my mind it makes as much sense as shipping it without a shell. Nevertheless...
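For anyone who wants to play the same game today, here's one way to start. This is a minimal Python sketch, not any particular tool: it inventories every executable on the PATH, subtracts the ones you already recognize, and prints a sample of the remainder as a spelunking list. The `known` set is a hypothetical stand-in for your own list of familiar commands.

```python
# Spelunking starter: inventory every executable on PATH, subtract what
# you already know, and what's left is the day's exploring list.
import os

known = {'ls', 'cat', 'grep', 'cd'}  # hypothetical: commands you already know

found = set()
for d in os.environ.get('PATH', '').split(os.pathsep):
    if os.path.isdir(d):
        for name in os.listdir(d):
            path = os.path.join(d, name)
            # Only count regular files with the execute bit set.
            if os.path.isfile(path) and os.access(path, os.X_OK):
                found.add(name)

unfamiliar = sorted(found - known)
print(f'{len(unfamiliar)} commands left to explore, e.g. {unfamiliar[:5]}')
```

From there, `man` or `whatis` on each unfamiliar name tells you whether you've found a gem or a leftover.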
I've also done the same thing with languages. Find some library, some function, some use of a function (or method) that I haven't used before. Then I try to do something useful with it in a program, or apply it to a problem that I've solved in some other way before.
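In a language like Python, the library itself can be spelunked the same way. This sketch walks a module's public names and prints the first line of each docstring, giving a quick survey of what you haven't used yet; `itertools` is just an example pick, and any module would do.

```python
# Spelunking a library: list a module's public names with the first line
# of each docstring -- a quick map of unexplored territory.
import inspect
import itertools

summaries = {}
for name in dir(itertools):
    if name.startswith('_'):
        continue  # skip private/internal names
    doc = inspect.getdoc(getattr(itertools, name)) or ''
    summaries[name] = doc.split('\n')[0]

for name, first_line in sorted(summaries.items()):
    print(f'{name}: {first_line}')
```

Anything in the output you can't explain is a candidate for the next round: read its full docs, then try to do something useful with it.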
I'll usually start with a specification to find something, but then I'll move to my books. What I often find is that the coverage is awfully homogeneous. I usually find a bit more diversity online in the coverage of any specific item, but often the items covered are fewer than those that appear in books or the original documentation. Sometimes it's clear what sort of problems a feature is meant to solve, other times it's less obvious.
This doesn't usually result in any big revelations or new ways to rework my current project. Occasionally it solves a problem that comes along some time later. More frequently, it gives me a better understanding of the parts of the language I'm already using. I get a better idea of why things are done the way they are, or of ways to divide a coding problem differently to make more effective use of the language.
I'd say spelunking through specifications is second only to reading other people's code as a means of improving my depth in a language. It gets me into things that aren't in my books or that I haven't turned up when looking up something I need to solve a specific problem.
Labels:
Programming
Thursday, August 7, 2008
Mini-Review: Beginning to Program in Java for Dummies by Barry Burd
I've posted a mini-review of the book Beginning to Program in Java for Dummies on my Java beginners' blog A Beginning Programmer's Guide to Java.
Labels:
Book Review,
Java,
Teaching Computers,
Teaching Programming
Learn Another Language
There are a lot of computer languages out there. Each one has vocal adherents. They often sound like the folks who used to battle over whether the Commodore 64 was better than the Atari 800. The fact is that computer languages are created for a reason: to solve a problem. Each one has its own features, and its own approach to dealing with specific problems that the implementors saw with existing languages.
Learning a computer language is an involved process. Learning a modern object-oriented language well enough to be well rounded in all its features is time-consuming. I'd say it takes two years to become competent, and longer to become really good at solving any arbitrary problem with the language. With this kind of a time investment, why dilute your efforts by learning more than one language?
I took English classes all the way through elementary school. I got good at spelling, and developed a pretty good vocabulary. But most of the grammar went right over my head. In 7th grade I started taking a class in German. Suddenly all this stuff about subjects, objects, and indirect objects started to make sense. It was enough to make me wonder why I'd never gotten it before; it seemed so simple.
The same sort of revelation can occur when learning a new computer language. Concepts that were muddy before can suddenly be illuminated by the new language. This is especially true when learning a language that's been designed for an entirely different class of programs than your first language.
The "big" languages today are far more flexible and more generally applicable to a wide range of problems and programming approaches than the simpler languages of 30 years ago. This is part of why it takes longer to learn them thoroughly. They've adopted the good parts of many earlier languages, usually building on a base of a strong prior language. The result is that it can make each language look like it's the be-all and end-all of languages. They fill a lot of checkboxes, and even where they don't have a specific feature, they have some other fairly direct way to get the same results.
This doesn't mean they're all the same, however.
Why Spend the Time?
Still, there's only so much time and energy. With learning a language being such a substantial undertaking, why divert effort into a new language rather than delving deeper into the one you already know?
First, diversification is not only educational, it's a survival skill. Overcommitment to one language means that when the need to shift finally comes, it'll be that much harder to change one's thinking to adopt a new language effectively. Languages will change, new languages will appear that will overshadow current languages. Current languages aren't likely to go away--code tends to hang around--but new languages will dominate. Even if you still work with an older language, understanding newer languages and what they do will be valuable to you. Chances are your language will change and adapt, too. A prior understanding of the new concepts will make it that much easier to take advantage of these changes. Either way, you'll be a more valuable programmer.
And it will be easier to learn the new languages if you are already familiar with several. Those new languages are going to come from somewhere, and if you've already got a regular approach for picking up new languages, you'll just be applying it to yet another.
Educationally, there's more to learn by picking up new languages than the new syntax and programming constructs. Learning a new language often means learning to work in a new development environment, and possibly with a very different workflow. You'll have a chance to see what other programmers use, and see the advantages. Perhaps the same features are already there in the IDE you use with your own language; you've just never been motivated to try them out since you didn't perceive the advantage. Each different language tends to have a preferred set of tools for version control and project management, too. Even if you only ever program professionally in one language, knowing more than one toolset for everything from editing to distribution is of tremendous value.
There's nothing worse than knowing the language everyone is advertising for, and then getting passed over for job after job because you don't have experience with the employer's environment.
If you're a lone wolf developer, or part of a small team, knowing more than one thing is even more important. It's essential to get as much productivity as possible out of the few work hours available. You can't afford to spend your time carrying water from the well in buckets; you've got to put in plumbing to be productive. If you don't get out of your groove and look around at what other languages and other toolsets have to offer, chances are you're carrying buckets and complaining about how there's no time to put down the buckets to learn plumbing.
Picking a Second Language
Choosing the language to learn is important, too. It shouldn't be over-thought, however. It's not like it's the last language you'll ever learn. You can also break off and change choices at any time, though it's best to stick with one long enough to get far enough in to understand why it does things the way it does.
For your new language, you may want to pick a second language close to the one you already know. This can be both good and bad. It can introduce you to some new things without requiring as much of a change of pace as a totally unrelated language. The problem is that you're not getting as far out of your groove as you would with something else, and the syntaxes of two similar languages might just get jumbled in your head making it even harder to use your primary language.
Picking a language that's out of left field for you will require more effort to learn and understand, but it avoids the problems of picking a closely related language.
One way to diversify is to select a language that's of a different scope than your current language. If you're programming in one of the more "full-up" OO languages like C++ or Java, you may want to pick up one of the "scripting" languages like Python or Ruby. Or you can take a step even lower to the actual scripting languages, like one of the Unix shells (ksh, bash, csh, etc.). If you already know one of the simpler languages (whether simpler in features or in syntax--I don't mean any of these comparisons as value judgements), you should probably give a try to something more involved.
There's also diversification in terms of language type. This relates to the approach the language takes to solving problems, and the mindset used with the language. If you know a procedural language, try a functional programming language, perhaps. If you're not doing object oriented programming, try one of the OO languages--particularly one of the OO scripting languages.
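As a tiny illustration of what "language type" means in practice, here's the same job written twice in Python: once as a procedural loop, once in a functional style. The task (squaring the even numbers) is made up for the example, but the shift in mindset is the point.

```python
# The same task in two mindsets: a procedural loop that mutates an
# accumulator, and a functional pipeline that composes transformations.
nums = [1, 2, 3, 4, 5]

# Procedural: build the result step by step.
squares_of_evens = []
for n in nums:
    if n % 2 == 0:
        squares_of_evens.append(n * n)

# Functional: filter then map, with no mutation.
functional_version = list(map(lambda n: n * n,
                              filter(lambda n: n % 2 == 0, nums)))

assert squares_of_evens == functional_version == [4, 16]
```

Writing the second form feels unnatural at first if you've only ever written the first, and that discomfort is exactly the stretching a new language type provides.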
If you want to limit your efforts, there are a lot of languages that you can learn without too much effort--at least well enough to get a feel for the language's approach. Clearly the syntax-light "scripting" languages fulfill this requirement. But other languages, such as LISP and LOGO, are also very easy to learn, and they're often well off track for most programmers relative to their usual language, making them a great learning experience. I know that LISP and its related languages are going through a bit of a vogue right now; my recommendation has nothing to do with that. I've been recommending LISP since I first learned it in a single quarter's class in 1981. And though LOGO is often considered a "kid's language," I personally consider it more of a LISP with a lot of the historical cruft taken out. Either way, the current vogue of LISP-y languages is a good thing. It means there are many good, modern implementations and tool sets for them.
Forth is also easy to learn, and playing with its guts is both fun and educational. You don't necessarily have to join the religion. ;)
Assembly programming is not as straightforward as it once was; interactions with modern OSes make it more involved. However, learning assembly on a simulator or virtual machine of some sort solves this problem. There are free, downloadable development and simulation environments for many microcontrollers. This can be a good way to get started with assembly.
If you only know one language, get out and learn another. It'll be fun, it will stretch your brain a bit, and it'll make you a better programmer, even in your first language.
Labels:
IDE,
philosophy,
Programming
Monday, August 4, 2008
The Anti-Java Professor: Still Off-Base
When we first heard from "the Anti-Java Professor" it created quite a stir. This made it inevitable that we'd hear from him again.
I disagreed with his points earlier, and I disagree with his points and his reasons now. I think I stand in a pretty good position to do so. As an instructor, I'm not in a position comparable to his; I'm just a part-time high school and middle school instructor. Otherwise, however, I am an aerospace engineer. My work covers a wide range of systems for aircraft, launch vehicles, satellites and probes. I've done design work for nuclear systems and man-rated systems. I've been doing safety-critical design work for about 28 years now. My work includes mechanical, electronic, and software design--usually more than one at a time on any given project.
So let's take a look at what the "Anti-Java Professor," Dr. Robert Dewar, is saying this time around.
On a high level, the presentation suggests that universities are turning out what are in essence incompetent script-kiddies by using Java in the core curriculum classes for Computer Science. The claimed cause is all those libraries that Java has, insulating the student from "real" programming.
He states that Java programmers aren't employable because their function is too easily outsourced. Without the ability to create "complex systems," their skills aren't up to snuff. His concept of what makes for a "complex system" seems to be circumscribed by an aerospace/defense mindset: he says that Java does not appear in complex systems, only in "trivial" applications, citing code used in aircraft as an example of where Java doesn't appear. C and Ada are called out as the "suitable" languages.
I've been around long enough to remember when C was the latest hazard to our aircrafts' software. I remember when our poor students were being underserved by that poor language, Pascal, when the only things you found in "real" applications were languages like JOVIAL, PL/I, and machine and assembly language. The fuss he makes about Java sounds like the fuss over barbed wire to me. That awful, poorly structured, type-unsafe language C was the end of the world to some. You didn't find it in aircraft 25 years ago. It had all these library functions--how could the programmer know what lurked inside them? How could you validate code written on top of them?
Well, it's been managed. There are compilers and libraries for C that are validated and well documented. Now there's C code flying.
Does that make C a better language? No. There are plenty of other uses for C, from writing entertainment software to system code, that don't require the same level of code validation as an aircraft's instrumentation, communication, and control systems. C came to aircraft because it was an effective language for writing maintainable code, and because there was a huge base of labor to draw on.
Java has those advantages, too. Maybe it's time to get a JVM validated for use with safety-critical code? That's all it would take to get Java in the air. In fact, it would seem to me that a validated runtime environment like a JVM with the appropriate low level reliability features could be a safer software environment than one based on C, or even Ada. In most systems that have the space and processing time available (which many do now), a base environment with applications running on top of it is typical. In some cases this is a high reliability RTOS, in others it's a specially-written core system. There's no reason it couldn't just as well be a hi-rel JVM.
There's also the matter of limiting one's perception of "complex systems" to aircraft code. Distributed database systems and transaction processing systems don't count as "complex"? Why, because the programmers don't necessarily have to code every algorithm from scratch? Java has been most successful on the server, in some cases acting as a translation layer between an underlying application and the web presentation (what I'm presuming Dr. Dewar considers "trivial"), in other cases it is far more than that. Enough to be called "complex" in my book. Does anyone die when this code goes down? Not necessarily, but that's not a measure of complexity.
Does Java make programmers incompetent?
Dr. Dewar claims so. He lists some questions he'd use to screen applicants. What's interesting about these questions to me is that I think the average BSCS grad would have had just as much trouble with these questions 30 years ago as they would today. I also consider these questions to be just as narrowly focused as his view of complex systems. Paraphrased, the questions are:
How would you find an error caused by improper compilation by the compiler?
How would you identify and deal with a bug in your processor's implementation?
For as long as I can remember, the undergrad CS program has only required two classes that even address these areas. One class in hardware, one in assembly language. Both are brief introductions to their subject. There isn't time in a one-quarter or one-semester class to get beyond this, particularly when it's first exposure for the majority of students. Operating systems classes are no longer about system level code, and haven't been for a long time. There's far more to talk about in an operating systems class than implementation of program loaders and I/O drivers these days.
The first question could be covered to some degree through the use of a plain debugger. I think the solution he'd look for would be to go a step further, and look at the object code. Usually a student will have gone through these steps at some point in their education, but they aren't, and haven't been, something that is routinely performed in most programming classes at the undergraduate level, no matter what language is used.
Hardware classes at the undergraduate level spend a lot of time on flip flops and gates. The better ones talk about processor and system architectures. Identifying bugs in silicon is usually well out of scope.
There's also this thing called the "internet." Seldom do we have to be pioneers when it comes to using compilers and processors any more. Before, there were long delays in information transfer. Likewise, if you're writing code for aircraft, you're not going to be an early adopter. Looking up known issues is likely to be a far more useful skill than finding them yourself the hard way. And if you're coding for ASICs, well, expect some OJT and experience between the time they hand you your sheepskin and the time you lay hands on code.
When I went through college, we did have some undergraduates who knew this stuff. Not one of them learned it in school. When you came into any class, you could quickly pick out the enthusiasts who made a hobby of computers and electronics. And this group mostly didn't wait until they had a degree to go out and get a job using what they knew.
There are study programs that encourage a lot of hands-on work and get students out into internships in a real working environment. These are great, but a student can do the same thing on their own as well. By the time I went to college I had held a job as a programmer, a job as an electronic technician, and had run my own company selling electronic and computer systems of my own design. In the case of universities and colleges, my own feeling is that waiting for a "senior project" to do practical work is waiting too long. The problem is that the curriculum is so filled with classes of low value that it prevents a student from both getting a good GPA and holding down a job relevant to their major. It's possible, but it's a good way to burn out a promising programmer, too. Unfortunately, accreditation boards don't seem to reward institutions for getting students into real work, but they seem to support lots of non-major classes.
An attack on the Java programming language as the source of the job market's ills is misplaced at best. Java isn't the problem. It doesn't make headlines to say that the problem is the same as it always has been: non-major requirements for degrees prevent any depth in a four-year program. The other half of this is a problem that's really no problem at all: Only a fraction of in-major graduates are ever going to be drawn to work that goes beyond creating applications, mash-ups, or other high level code. This is both because of personal preferences and because of the market itself.
The place where a tragedy occurs here is when a student goes through a program, gets a degree, and thinks that makes them prepared for any programming job. The answer to this is the same as it always has been. Students need to get their heads out of the school zone. They need to stop relying on the university as a primary source of education. The resources for self-education are better than they have ever been before. The opportunities for doing real work with either hardware or software outside of school are the best they've ever been. Either as a hobby or in an entry-level job.
This is where the real education happens. It also makes the university education more valuable. Students come prepared to learn, and with a desire to learn, when they already have some background. They get the full advantage of the broadening aspects of college. They can interact with their professors at a much higher level, because they aren't having to learn the language first; they already have at least a general grasp of the basics. The elements that fit into theoretical frameworks are familiar to them--they aren't memorizing them by rote and hoping that it'll all make sense somewhere down the road.
Among the solutions Dr. Dewar calls for to the problem of unprepared programmers is one-on-one mentoring. This isn't realistically available for most undergraduates. At least not in school. It is available to interns, however. It's also available to hobbyists who are able to find a group of fellow enthusiasts.
One of the biggest lessons I've learned in my many years of working, interviewing, and managing projects is that specific technical skills are far less important than outlook and character. Technical skills are the easiest skills to teach. They're also the skills that must be updated and replaced regularly.
Actual on the job work that's going to be handed to a new hire is typically going to be work that already has an established procedure. That means it's going to be relatively easy to teach, and that the situations that arise in the performance of that work are pretty well known. That includes the identification of compiler and hardware bugs. What becomes of paramount importance is the team itself and its willingness to share information, and the character of the new worker and their willingness to partake of the information and shared lore of the rest of the group.
This isn't to say that one would want to hire someone to program C or assembly who doesn't have any relevant experience. But it also doesn't mean that using Java makes them a bad candidate.
It may be time consuming to interview candidates, but turning away candidates is an educational process. If candidates are turned away without getting a sense of where they failed in the interview, then it is just a waste. But if they come away knowing that they couldn't answer a battery of questions on handling registers in C, then they have information they can act on.
They may come away with the knowledge that the job was not for them. Or they can buck up, go home, and download a C compiler and some resources, and get cracking. They may end up delivering pizzas for a few weeks, but chances are that'll give them even more incentive to press on with their own studies. In less than a semester, they can acquire the skills to manipulate registers in C, debug object code, and familiarize themselves with the practices of writing hi-rel code. No matter what language they studied in school.
If a student comes out of a university not knowing how to expand their education in this way, then the university has done them a far greater disservice than the use of any language you care to name.
Debating over Java's use in education is a sure headline-grabber these days. It's a cheap and easy ploy, and it's intellectually dishonest. Java is not the problem, any more than Pascal was when it replaced Fortran, or C was when it replaced Pascal in the educational curriculum. In each case the universities were making the change to serve their students.
The problem is the wall between education and reality. Some institutions do a lot to get past this, but not many, and even that's no guarantee that a student is going to get any advantage. If students are using Java to write "trivial" web apps, then at least they are on the web. They're getting far more exposure to reality than any student who spent the same time writing a bubble-sort in Fortran, or Pascal, or C. For them the ideas of networking, clients, and servers are like air; they're just there. They start with a lot of knowledge they don't even realize they know. Sure, they can't debug a bad instruction in a processor, but how many CS graduates of the past 30 years could? The ones with an EE minor, maybe.
And if you want to limit your interview time, put a section in the application for hobbies and magazines read. If you don't see something like electronics, robotics, or microcontrollers in the hobbies, or something like Circuit Cellar Ink, Embedded Development, or other trade journals in the magazines, the candidate probably shouldn't be on your 'A' list if you want someone who can debug assembly and processors.
I've updated this article to use the proper form of address for Dr. Dewar. At the time I wrote this article I failed to find a reference for this. Since then a reader, "aquerman," has provided me with the appropriate information. No disrespect toward Dr. Dewar was intended through the failure to use his proper title earlier, regardless of our divergent views on the role of Java with respect to programmer incompetence.
Labels:
Java,
philosophy,
Programming,
Teaching Programming
Friday, August 1, 2008
Class Objective: Going from Consumer to Creator
Teaching a computer class in high school is a lot like teaching a class in the fine arts. You want to teach the students to express themselves. You provide them with basic information on tools and techniques. Most of all, you try to help them overcome the mental barriers that keep them from believing that they can produce something worthwhile.
These barriers are many. Just like a student who hesitates to pick up a brush because they "know" they can't paint well, the computer student will resist trying their hand at some skill. The resistance takes a different form, usually, than refusing to try. Computers are the perfect tool for passive-aggressive behavior. There's always an obstacle for someone who doesn't want to try.
The problem starts long before students enter the class. Students are conditioned to believe that it's not safe for them to mess with the computer. They adopt a belief that the things they enjoy on the computer have been created by teams of professionals who are smarter than they. In some cases, they have an attitude that they can't be bothered to learn it since this will look like a crack in their "I already know it all" veneer.
The expectations of parents are often limited, too. For many, computer class is a typing class. Certainly it's a good place to get some time in on a keyboard, but typing is learned far better when there's a purpose driving the fingers on the keys. So treating the course as a mind-numbingly dull typing class would defeat that. Not to say there aren't students who like dull rote classes for an easy 'A' without engaging the brain.
Another problem is expectations that are too high. Some students want to write a full A-class computer game in a single semester. Interest wanes when they find out that that's not likely. They want to run before they can walk. Then the conviction that they can't do what the pros do comes in again.
The most effective attack on negative mental attitudes is to have exercises that produce visually stimulating results. The problem is that the range of skill levels coming into the class is so wide that some students may need to be told what "left-click" and "right-click" mean, and they may be totally unfamiliar with the keyboard. Others may already be fast typists and fast readers.
My own way of dealing with this has been to have open-ended in-class exercises. There is a basic level of achievement that is expected. Beyond that, extra credit is given for creative effort beyond what's required. Multiple avenues for extension of the basic assignment are provided. Usually they all require some degree of research on the part of the student. This gives more advanced students an opportunity to use their time constructively while I assist others with bootstrapping themselves on the basics.
I also allow collaboration and sharing of information on class exercises. This removes me as an information bottleneck in class, and it gives the students a social incentive to research solutions to problems.
As to the types of problems, I start with creating a rich document in a word processor. It seems like a banal exercise, particularly when so many high school computer classes are nothing more than an introduction to horizontal applications. But rather than focusing on the use of the application itself, I focus on the product.
I don't assign a specific document to write. Rather, I work with the class to identify something that would be fun to write. Last year we came up with a specification for a computer video game. It could as well have been a guide to online art of a specific genre, or any of a number of other subjects that interest the students. I place requirements for what needs to be in the document on the basis of what would make a good document of that type, not on the basis of the mechanics of the programs we use to create its content.
As we progress from word processing to using basic tools for text editing, file management, system monitoring, and so on, I use the same approach. With a goal in mind that interests the students, we can focus on that and learn the technical mechanics along the way.
The result is that the students get an opportunity to create something of their own, not just another copy of a document given as an assignment. They get to learn what they learn for a reason, not as an end in itself to fill grading checkboxes.
I allow the students to express themselves with their assignments. We keep things within the bounds of propriety, of course. At first they're tentative, but by the time we're doing web pages they've usually gotten to be outspoken. I don't grade them on what they say on their web pages, but on how they say it. What features of HTML have they applied, how does the page appear as a whole, and so on.
Then, after each unit in class I have a little surprise. I take some time to list for them all the technical skills they've acquired. All that stuff that goes on resumes or that gets listed as a class subject on the syllabus. The lists are long, and full of all sorts of scary polysyllabic words. We go through each scary word, and they learn that that scary word is something they know and can handle. We review their work, I comment on it, and their peers do, too.
They discover something. It's something we keep discovering it over and over in class. A message that bears repeating, over and over.
They can create.
These barriers are many. Just like a student who hesitates to pick up a brush because they "know" they can't paint well, the computer student will resist trying their hand at some skill. The resistance usually takes a different form than outright refusal, though. Computers are the perfect tool for passive-aggressive behavior: there's always a ready-made obstacle for someone who doesn't want to try.
The problem starts long before students enter the class. Students are conditioned to believe that it's not safe for them to mess with the computer. They adopt a belief that the things they enjoy on the computer were created by teams of professionals who are smarter than they are. Some arrive with an "I already know it all" veneer and can't be bothered to learn, since trying would expose a crack in it.
The expectations of parents are often limited, too. For many, computer class is a typing class. It's certainly a good place to get some time in on a keyboard, but typing is learned far better when there's a purpose driving the fingers on the keys, and treating the course as a mind-numbingly dull typing class would defeat that. Not that there aren't students who'd like a dull rote class for an easy 'A' without engaging the brain.
Another problem is expectations that are too high. Some students want to write a full A-class computer game in a single semester. Interest wanes when they find out that's not likely. They want to run before they can walk. Then the conviction that they can't do what the pros do sets in again.
The most effective attack on negative mental attitudes is to have exercises that produce visually stimulating results. The problem is that the range of skill levels coming into the class is very wide: some students may need to be told what "left-click" and "right-click" mean, and may be totally unfamiliar with the keyboard, while others are already fast typists and fast readers.
My own way of dealing with this has been to have open-ended in-class exercises. There is a basic level of achievement that is expected. Beyond that, extra credit is given for creative effort beyond what's required. Multiple avenues for extension of the basic assignment are provided. Usually they all require some degree of research on the part of the student. This gives more advanced students an opportunity to use their time constructively while I assist others with bootstrapping themselves on the basics.
I also allow collaboration and sharing of information on class exercises. This removes me as an information bottleneck in class, and it gives the students a social incentive to research solutions to problems.
As to the types of problems, I start with creating a rich document in a word processor. It seems like a banal exercise, particularly when so many high school computer classes are nothing more than an introduction to horizontal applications. But rather than focusing on the use of the application itself, I focus on the product.
I don't assign a specific document to write. Rather, I work with the class to identify something that would be fun to write. Last year we came up with a specification for a computer video game. It could as well have been a guide to online art of a specific genre, or any of a number of other subjects that interest the students. I place requirements for what needs to be in the document on the basis of what would make a good document of that type, not on the basis of the mechanics of the programs we use to create its content.
As we progress from word processing to using basic tools for text editing, file management, system monitoring, and so on, I use the same approach. With a goal in mind that interests the students, we can focus on that and learn the technical mechanics along the way.
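To give a flavor of what I mean by "basic tools", here is a hypothetical sampler of the sort of everyday commands a class might practice with on a Linux lab machine. The tool names are standard GNU/Linux utilities; the file and folder names are made up for illustration, not taken from any actual class assignment:

```shell
#!/bin/sh
# A few everyday command-line tasks, grouped by the skill they exercise.

mkdir -p classwork                          # file management: make a working folder
printf 'draft one\n' > classwork/notes.txt  # text editing: create a small text file
cat classwork/notes.txt                     # text viewing: print the file's contents
ls -l classwork                             # file management: list the folder's files
df -h | head -n 2                           # system monitoring: disk usage summary
```

The point of exercises like these isn't the commands themselves but the habit they build: the computer is something the student can direct, inspect, and recover from, not a sealed appliance.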
The result is that the students get an opportunity to create something of their own, not just another copy of a document given as an assignment. They get to learn what they learn for a reason, not as an end in itself to fill grading checkboxes.
I allow the students to express themselves with their assignments. We keep things within the bounds of propriety, of course. At first they're tentative, but by the time we're doing web pages they've usually gotten to be outspoken. I don't grade them on what they say on their web pages, but on how they say it. What features of HTML have they applied, how does the page appear as a whole, and so on.
Then, after each unit in class I have a little surprise. I take some time to list for them all the technical skills they've acquired. All that stuff that goes on resumes or that gets listed as a class subject on the syllabus. The lists are long, and full of all sorts of scary polysyllabic words. We go through each scary word, and they learn that that scary word is something they know and can handle. We review their work, I comment on it, and their peers do, too.
They discover something. It's something we keep discovering over and over in class. A message that bears repeating, over and over.
They can create.
Labels: philosophy, Teaching Computers