Plan First — What’s Wrong With (many) Programming Courses

There’s a big difference between learning a foreign language (e.g., French) and learning a computer programming language. Sure, any computer language is much simpler, trivial even, compared to, say, French. But no, that’s not it.

You already know a language and you’ve used it for years.  If you want to talk to someone, you know what you want to say — just not how to say it in the new language.

If you are new to computer programming, it’s different.  You don’t have a language that you know, and worse, you really don’t know what you want to “say” in specific enough terms.   Human language is (way) too general for a computer.

Courses use examples like “Add two numbers” or say “Hello World” — mostly trivial problems that you really don’t need a computer to solve. They’re necessary at first, in order to learn syntax and other rules, but that’s putting the cart before the horse. Learning how to think “like a computer” comes first.

My goal is for you to learn an appropriate thinking process. I started by acquainting you with the LRC computer. You should now know two things:

  1. How much of a pain it is to program in the computer’s machine language, and
  2. How precise you must be. Missing a single step in the procedure (program) makes it fail.

If I asked you to write a program that inputs 3 numbers and outputs the sum of the 2 largest, what would you do? How about, “I’d look at the numbers, pick the 2 largest, and either add them in my head or use my trusty calculator.” Don’t need a computer. Next!

But now, suppose I said, “Here’s a file with about 10,000 numbers and I’d like to find the largest 100 and sum them. Oh, some of the data may be corrupt.” Uh, “corrupt”? (That usually means symbols or letters instead of numerals. Hard to add 3.14 to A. Sorry.)

Could you do it “by hand”? Sure, but it would take a while, and how sure would you be of the answer? You’d probably have to do it at least twice. If the answers are the same, a sigh of relief; if not, bad words? “You did it? Good, here are a few more.” Ugh, you need some “mechanical” help.

How would you go about making up a recipe (program) for a computer to do this?  Think of cooking. What are the ingredients and steps involved?  Only now you have to make it automatic from start to finish.  You are not watching the pot boil. (I know, watched pots never…)

How about this approach as a first try?

  1. Input the file name.  (assuming it is in the memory somewhere)
  2. Input the first 100 numbers.
  3. Store them (actually copies) in a list of 100 numbers.
  4. Somehow order them low to high.
  5. Input the next number.
  6. See if it is larger than any of the 100 stored. Start at the low end and replace the first one found. Hmm, do I have to re-order the list for this to work for the next number? Maybe re-sort the list with the new number and throw away the smallest? That might work. Think about it.
  7. Go back to step 5 until I run out of numbers. Then sum the 100 and output the answer.

BTW, no self-respecting 🙂 computer can even understand these steps. Any computer can only understand commands in its own language. (Look up super-dumb in the dictionary and you’ll see a picture of a computer.) You must tell it EXACTLY what to do, step by (agonizing) step.

Looking back at these steps, what about corrupt data? Maybe add some test in step 5 to make sure the input is a number and not something else, like letters or symbols. What if whoever gave you the file lied and it only held, say, 80 numbers? Should your program handle that? How do you add up 100 numbers if there are only 80?

You need the “Hey bone-head, you only gave me 80 numbers!” message to go back to the user (somehow).

All of this thinking might work, but it has to be translated into computer language. Then, with near-100% certainty, it won’t work at first. But after a few rounds of “Gosh (non-literal translation), I forgot that”, along with some “I don’t understand why it’s not working”, the program will work.

Old saying: “Hardware eventually fails, but software eventually works.”

Another question: how do you know if it really works? Do you just examine all 10,000 numbers and the 100 gleaned from them? Try it on a much smaller list, with and without errors? What does “work” really mean? Classic answer: “It depends!” (Got me through school.)

Does it work? Not a trivial question, for sure. If there is corrupt data (and you don’t test for it), the computer will just stop. It won’t tell you why (but you can guess). If you can say, “Hey, Yo-Yo, your file is corrupt, get me one that isn’t”, then problem solved! (You might not get paid, though.)

Error consequences: important. If it’s just sorting lists or moving icons on a screen, then no biggie. But if you are moving a physical object, an error could be disastrous.

Any “real world” application takes lots of planning, thinking, testing, and yes, judgment. The actual coding process is maybe 20% of the work.

The application may have to be fixed, and will probably be modified, in the future. It’s good to write the code and documentation following clear methods and accepted standards, for the sake of the folks who will be involved.

Anyone talk about that in our schools?  Hopefully, but I’ve seen little, if any, of that.

It’s “Move (your avatar) past the bad guys and get the gold!” — voila, you are a programmer (and a star!).

Not!

BTW, I (or you) will write a program to do the “sum of the 100 biggest” in a video. How about two programs? One to see if the file of numbers is clean, and another to do the selecting and summing. Simpler and cleaner?

Yes. (It’s divide and conquer, a great strategy.)
