LRC03 video and Some Advice

Two new concepts. Looping and Decisions.

Looping: Doing the same procedure over and over with different inputs and ultimately outputs.

Decisions: Thinking?  Hardly.  It’s only branching or jumping by command.

One command tells the machine (or our little robot) to stop taking commands in sequential order and jump to another place if the accumulator (A register) holds zero.  Another if it holds a positive number.  No smarts, just rules — obeyed without thinking.

Actually, I just misspoke.  What actually happens is that the computer gets its next instruction (command) by going to a register (the program counter) that tells it where to go for its next command — as always.  OK, what if one of the commands changes the contents of the program counter register?  You then get a jump to a new place on the next cycle.  No thinking involved.
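The idea above can be sketched in a few lines of Python.  This is a toy fetch-execute loop, not the real LRC instruction set — the mnemonics and opcodes are invented for illustration.  The key line is the one that overwrites the program counter:

```python
# Toy fetch-execute loop. The mnemonics (LOAD, ADD, BRZ, JMP, HALT)
# are invented for this sketch -- they are NOT the actual LRC commands.
def run(program):
    a = 0    # accumulator (the A register)
    pc = 0   # program counter: where the next command comes from
    while True:
        op, arg = program[pc]
        pc += 1                # normally: just move to the next command
        if op == "LOAD":
            a = arg
        elif op == "ADD":
            a += arg
        elif op == "BRZ":      # branch if the accumulator holds zero
            if a == 0:
                pc = arg       # changing the program counter IS the jump
        elif op == "JMP":      # unconditional jump
            pc = arg
        elif op == "HALT":
            return a

countdown = [
    ("LOAD", 3),   # 0: A = 3
    ("ADD", -1),   # 1: A = A - 1
    ("BRZ", 4),    # 2: if A is zero, jump to command 4
    ("JMP", 1),    # 3: otherwise, loop back to command 1
    ("HALT", 0),   # 4: stop
]
print(run(countdown))   # 0 -- the loop ran until A hit zero
```

Notice there is no “decision making” anywhere — just a rule, obeyed without thinking, that replaces the contents of `pc`.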

Advice:  You have to learn stuff yourself.  Use the internet — an amazing resource.  Stumble, experiment, mess around, fail.  Cost is very low.  We’re all an experiment-of-one.  Learn how you work best.

Your brain is a learning machine.  The super-duper calculator analogy is false.  How did we learn one of the most complex skills, our language?  Elite professors, latest methods?  Wonder how much later newborns would learn to talk if they were “taught” by elite experts just after birth? 🙂

Exposure, repetition, and desire get you pretty far.  No government regulations or approved methods needed.

For example, I bet that if you stuck with it, using just this simple LRC language (10 commands and only 100 storage locations) you’d be able to solve all kinds of tricky problems.

If we removed the 3-digit and 100-storage-location limits, you could solve very complex problems.  It would be a real pain, but doable.

In the early days, all we had were machine languages (in binary yet) but we were able to simulate things like non-linear satellite orbits (not a trivial mathematical problem either).

One more:  Learn something about the underpinnings of whatever subject you are studying.  You will never (99.99% probability) deal with a machine language.  Maybe an assembly language (where mnemonics replace numbers for commands, like ADD 90 instead of 190), but even that’s doubtful.
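To make the mnemonic idea concrete, here is a tiny Python sketch of an assembler’s core job — swapping a mnemonic for its opcode digit.  Only the ADD 90 → 190 pairing comes from the text above; the other entries in the table are assumptions for illustration.

```python
# Sketch of what an assembler does: mnemonic -> opcode digit.
# ADD = 1 matches the text's example (ADD 90 -> 190); the rest of
# the table is invented for illustration.
OPCODES = {"ADD": 1, "SUB": 2, "STORE": 3, "LOAD": 5}

def assemble(line):
    mnemonic, address = line.split()
    return OPCODES[mnemonic] * 100 + int(address)

print(assemble("ADD 90"))   # 190
```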

You’ll no doubt use higher level and specialized languages — if you do actual programming at all.

You now know that whatever computer language you use, it must get (somehow!) translated into a machine language for the particular computer that’s involved.

Aside: Building these translators for different computers is a good business and they need folks.

All of this is also training your thinking.  Dealing effectively with a super-dumb machine that can calculate and execute commands in a billionth of a second requires very precise recipes — that must come from precise, careful thinking.

Could that be the most important part of all your effort?

Yes.


New Important Breakthrough (Didn’t see this coming)

1APR2016 from my research notes:

Every once in awhile I come across something that will have far ranging effects on our technology — and ultimately on our lives.

This discovery, which has been going on in secret for many years, is finally coming to light.  An old friend from my grad-student days just sent me the information.

Here’s a video that describes it.  Don’t be surprised if it seems impossible at first. Check out the responses to get a better feel for how important and far reaching this is.  Warning: some crude language in the responses/questions.

More to come on this great invention/development.

Plan First — What’s Wrong With (many) Programming Courses

There’s a big difference between learning a foreign language (e.g., French) and learning a computer programming language.  Sure, any computer language is much simpler, trivial even, compared to, say, French.  No, that’s not it.

You already know a language and you’ve used it for years.  If you want to talk to someone, you know what you want to say — just not how to say it in the new language.

If you are new to computer programming, it’s different.  You don’t have a language that you know, and worse, you really don’t know what you want to “say” in specific enough terms.   Human language is (way) too general for a computer.

Courses use examples like “Add two numbers” or “Say Hello World” — mostly trivial problems that you really don’t need a computer to solve.  Necessary at first, in order to learn syntax and other rules, but that’s putting the cart before the horse.  How to think “like a computer” comes first.

My goal is for you to learn an appropriate thinking process.  I started by acquainting you with the LRC computer.  You should now know two things:

  1. How much of a pain it is to program in the computer’s machine language, and
  2. How precise you must be.  Missing a single step in the procedure (program) makes it fail.

If I asked you to write a program that inputs 3 numbers and outputs the sum of the 2 largest, what would you do?  How about, “I’d look at the numbers, pick the 2 largest, and either add them in my head or use my trusty calculator.”  Don’t need a computer.  Next!

But now, suppose I said, “Here’s a file with about 10,000 numbers and I’d like to find the largest 100 and sum them.  Oh, some of the data may be corrupt.”  Uh, “corrupt”?  (That usually means symbols or letters instead of numerals.  Hard to add 3.14 to A, sorry.)

Could you do it “by hand”?  Sure, but it would take a while — and how sure would you be of the answer?  Probably have to do it at least twice.  If the answers are the same, a sigh of relief — if not, bad words?  “You did it?  Good, here are a few more.”  Ugh, you need some “mechanical” help.

How would you go about making up a recipe (program) for a computer to do this?  Think of cooking. What are the ingredients and steps involved?  Only now you have to make it automatic from start to finish.  You are not watching the pot boil. (I know, watched pots never…)

How about this approach as a first try.

  1. Input the file name.  (assuming it is in the memory somewhere)
  2. Input the first 100 numbers.
  3. Store them (actually copies) in a list of 100 numbers.
  4. Somehow order them low to high.
  5. Input the next number.
  6. See if it is larger than any of the 100 stored.  Start at the low end and replace the first one found.  Hmm, do I have to re-order for this to work for the next number?  Maybe re-sort the list with the new number and throw away the smallest?  That might work.  Think about it.
  7. Go back to step 5 until I run out of numbers.  Then sum them and output the answer.

BTW, no self-respecting 🙂 computer can even understand these steps.  Any computer can only understand commands in its own language.  (Look up super-dumb in the dictionary and you’ll see a picture of a computer.)  You must tell it EXACTLY what to do — step by (agonizing) step.

Looking back at these steps, what about corrupt data?  Maybe add a test in step #5 to make sure the input is a number and not something else — like letters or symbols.  What if the person who gave you the file lied and it only held, say, 80 numbers?  Should your program handle that?  How do you add up 100 numbers if there are only 80?

Need the “Hey bone-head, you only gave me 80 numbers!” message to go back to the user (somehow).
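Here is how the recipe above might look in Python — a sketch only, with the corrupt-data test and the “short file” complaint folded in.  The sample data and the message wording are just for illustration:

```python
# Sketch of steps 1-7 above, plus the corrupt-data and short-file checks.
def sum_largest(items, k=100):
    kept = []       # the k largest valid numbers seen so far, low to high
    skipped = 0     # count of corrupt entries
    for item in items:
        try:
            value = float(item)    # the step-5 test: is this a number?
        except ValueError:
            skipped += 1           # "Hard to add 3.14 to A"
            continue
        kept.append(value)
        kept.sort()                # re-sort with the new number...
        if len(kept) > k:
            kept.pop(0)            # ...and throw away the smallest
    if len(kept) < k:
        print(f"Hey bone-head, you only gave me {len(kept)} numbers!")
    return sum(kept), skipped

# Tiny demo: find the 3 largest in a "file" with one corrupt entry.
total, bad = sum_largest(["3", "A", "10", "2", "7"], k=3)
print(total, bad)   # 20.0 1
```

Re-sorting the whole list for every input works, but it is wasteful; that’s exactly the kind of refinement (step 6’s “think about it”) that separates a first try from a polished program.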

All of this thinking might work, but it has to be translated into computer language.  Then (with about 100% certainty) it won’t work — at first, but after a few, “Gosh (non-literal translation), I forgot that”, along with some “I don’t understand why it’s not working”, the program will work.

Old saying: “Hardware eventually fails, but software eventually works.”

Another question:  How do you know if it really works?  Do you just examine all 10,000 and the 100 gleaned from it?  Try it for a much smaller list with and without errors?  What does “work” really mean?  Classic answer: “It depends!”  (Got me through school)

Does it work?  Not a trivial question, for sure.  If there is corrupt data (and you don’t test for it) the computer will just stop.  It won’t tell you why (but you can guess).  If you can say, “Hey, Yo-Yo, your file is corrupt, get me one that isn’t,” then problem solved!  (You might not get paid, though.)

Error consequences:  Important.  If it’s just sorting lists or moving icons on a screen, then no biggie.  But, if you are moving a physical object, could be disastrous.

Any “real world” application takes lots of planning, thinking, testing, and yes, judgment.  The actual coding process is maybe 20% of the work.

The application may have to be fixed and probably modified in the future.  Good to write the code and documentation following clear methods and accepted standards for the folks who will be involved.

Anyone talk about that in our schools?  Hopefully, but I’ve seen little, if any, of that.

It’s “Move (your avatar) past the bad guys and get the gold!” — voila, you are a programmer (and a star!).

Not!

BTW, I (or you) will write a program to do the “sum of the 100 biggest” in a video.  How about two programs?  One to see if the file of numbers is clean, and another to do the selecting and summing?  Simpler and cleaner?

Yes.  (It’s divide and conquer — a great strategy.)


Programming “Petting Zoo” (Hour of Code)

I see that grade schools all over the world participated in an “Hour of Code” last week.  Lots of PR.  Kids had a great time.  “Hey, I programmed a Star Wars game on my iPad!”  In my town, a person called an “information literacy instructor” said, “It’s all about critical thinking skills and logic.  They are learning 21st century skills.”

In an hour.  Wow, didn’t realize it was so easy.  On the other hand, we have very smart kids at our schools.  I’ve been told that they are all “above average” 😉

Seriously, I think it is excellent that kids get exposed early to technical things, but I wonder if they aren’t getting the wrong idea.  I asked one of them to explain how she built her game and how it worked.  Her answer invoked a memory of John Cleese (Monty Python) explaining how to play the flute.  “It’s very simple.  You just blow over this hole, here.  Wiggle your fingers, there — and the music comes out — easy!”

I also asked, “What’s next?”  “Nothing at school — no time.”  It’s analogous to the Chicago Symphony oboist giving a demonstration in class, but not having any programs to teach the kids how to play.

Why not follow up the fun introduction to programming with an on-line set of courses that will build the necessary foundations?  Kids can then learn at their own speed, time, etc.  Programming is abstract enough that it can be learned that way.

There are 70 million hits for the Google search of “learn C programming”.  So, the courses are out there.  Right?

For sure, but which ones to take, and in what order?  I’ve looked at a bunch of them, and most are very good, but limited to the specific language.  However, kids need an interesting approach that doesn’t trivialize how important learning fundamentals is.  (Think about building a house.  Designing that 2-story entrance hall with the curved staircase is really fun and challenging, but what about the critical foundation and basement design?  Can that be as fun and interesting?  Does it have to be?)

Stay tuned.  I’m going to give it a shot.  I tried an approach on my grandkids this last summer.  They stayed interested and learned a lot of quality concepts and skills.  I believe that they can build on these and better understand how languages work and how a computer might interface with real world objects.

Test: 2APR (not a joke)

Trying not to load up your (follower) emails with a full post — also noted that the video links do not come through.  I have turned on the “summary” switch instead of “full posts,” but I have no control over how much text is in the summary.

Some random musings.

I’m amazed at the ability of the media pundits to mind read.  I know that there is a lot of ESP type research going on at universities.

They should dump all of that and just study the media folks.  “What was X thinking?”  Where X is almost anyone in the news.  There is no shortage of answers to that question.


I’m amazed at how they know what X was (is?) actually thinking.  Maybe just hanging around with those folks does it.  Osmosis sort of thing.

Another:  What do you think the time over/under (after you turn it on) for any newscast is before the word “poll” gets mentioned?

I’m guessing 10 seconds.


Just checked the test out.  Seems to work fine.

Back to work on some videos.

Does Joe Still Win even though he has to run 10 meters more?

Yessir!  When Sam is at 90 meters, so is Joe.  They are tied.  Remember that Joe (faster) runs 100 meters in the same time that Sam runs 90.  Because they both have 10 meters to go, Joe, being faster, wins.

That works, if you happen to “see” it.  How about using a more systematic approach?  Use the formula D = R * T, or R = D/T, or T = D/R, where D = distance, R = rate (speed), and T = time.  (The * means multiply, and the / means divide.)

Here’s a way to think about it.  What do we want to know?  Is Joe’s time less than Sam’s?  So we have to figure out how long each takes in the second race — but we don’t know the times or rates of the runners.  We do know (better: can figure out) the relative rates.

You get that from the first race.  (We assume the same speeds in the second race.)  Then using that information you can calculate the relative times, T, for each — for Joe to run 110 meters, and Sam to run 100 meters.  Notation: I’ll use (j) for Joe’s variables and (s) for Sam’s.

From the first race:

Joe: R(j) = 100/T,  Sam: R(s) = 90/T  (T = same for both)

Solve both for T and then equate them.

T = 100/R(j) = 90/R(s) so,

R(s)/R(j) = 90/100.   (Sam runs 90% as fast as Joe).

Now for the second race: Here the times may be different. Solve for the time for each, and see if we can get that ratio of rates.  If we can, then we just plug in the 90/100.

T(j) = 110/R(j) and T(s) = 100/R(s)

Now, divide the entire two equations:

T(j)/T(s) = [110/R(j)] / [100/R(s)]

T(j)/T(s) = 110/100 * R(s)/R(j).

Substituting: T(j)/T(s) = 110/100 * 90/100 = 9900/10000 = 0.99.  Joe’s time is 99% of Sam’s.
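You can sanity-check that algebra numerically.  Here’s a quick Python check; Joe’s speed is an arbitrary assumption — any value gives the same 0.99 ratio:

```python
# Numeric check of the derivation above.
r_j = 10.0           # Joe's speed in m/s (arbitrary pick)
r_s = 0.9 * r_j      # from the first race: Sam runs 90% as fast
t_j = 110 / r_j      # Joe's time over 110 meters
t_s = 100 / r_s      # Sam's time over 100 meters
print(round(t_j / t_s, 2))   # 0.99 -- Joe still wins
```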

I heard you.  “What a pain!  Confusing! Why work so hard to do something that you can just figure out in your head!?”

You are right — sort of.  What happens is that there are many problems, analyses, and opportunities where you can’t just “see” the answer.  You need to go through some logic.  Each step builds on its predecessor.  You also have to get very good at manipulating symbols — and develop habits that help keep you from making mistakes — especially typos.

Exercise for the reader:  How far back does Joe have to start for them to tie?

This may be overkill, but here’s how I’d solve the problem in a more general way.  (Parens hold current problem’s data)

Let d = race distance (100 meters)

Let x = “loss distance” (10 meters)

Let y = “setback distance” for the second race (10 meters)

From the first race we know that R(j)/R(s) = d/(d-x).  Makes sense.  Ratio of speeds inverse to ratio of distances.

For second race Joe runs d+y, while Sam runs d.  If you go through the same steps we did before we get

T(j)/T(s) = (d+y)*(d-x) / (d*d),  and if y = x

T(j)/T(s) = (d*d – x*x) / (d*d), clearly less than 1.  (Could have written d*d as d^2 or figured out how to write exponents 🙂)

If you want to find out what y is for them to tie, you just have to set the above expression = 1 and solve for y.

(d+y)*(d-x) = d*d.  Divide both sides by (d-x):

d+y = d*d / (d-x).  Subtract d from both sides:

y = d*d / (d-x) – d.  Plugging in d = 100 and x = 10, we get

y = 11.1111 meters.  You could also expand the (d+y)*(d-x) and collect terms.  Then y = x*d / (d-x); plug in the numbers and you get the same answer.
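That general formula is easy to check with a couple of lines of Python (a sketch; the variable names follow the text above):

```python
# y = x*d / (d - x): how far back Joe must start for a tie.
def setback_to_tie(d, x):
    return x * d / (d - x)

print(round(setback_to_tie(100, 10), 4))   # 11.1111 meters
```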


Program Taxonomy (sort of)

The “sort of” is because a rigorous classification of computer languages is probably impossible, and a lot of the details fall into the “who cares” category.

From the computer itself to the folks programming them:

Machine Language — all zeroes and ones — and lots of them.  The only one that a given computer can use.  Literally a recipe, but unbelievably detailed.

Assembly Language — specific to a class of computers, but very close to the machine language. Somewhat readable by humans but it is still almost 100% focused on actual computer operations.  The translation into machine language is straightforward, often simple (if you know how that computer actually works).

Higher Level Language (text based) — here you can write things like “z = x + y” or “Sales = Price * Volume”.  One of the earliest was “C” (its predecessor was called “B”, which still exists, though I’ve never seen it used).  Examples are C, C++, C#, Java, JavaScript, Python, VBA, and a bunch of others.  These languages must eventually be translated into machine language.  You’ll hear words like “compile” and “interpret”, and probably others, but rest assured, to “work”, your high-level language program must be converted into machine code.  Not to worry, it’s done millions of times a minute all over the world.  Those programs work extremely well.

Higher Level Language (graphics based) — here you move icons (symbols) around on a screen, hooking them up in logical ways to perform your task.  Again, this set of symbols and their connections have to be converted into machine language.  Not to worry, that’s done all of the time. The programs that do it are as perfect as programs go.  The odds are that you’ll never have to do anything but invoke them.  The translation happens so fast that you may not even be aware of anything going on except seeing the end result.

Special Purpose Languages (SPL) — There are thousands of these.  Any computer-controlled machine will probably have a special language.  They make it easy for a user to program the machine.  A good example is any language that controls a machine tool.  The operations are very specific: move the tool to point P1, drill hole, etc.  Look up APT (Automatically Programmed Tool) as an example.  Robotic prostheses need to be programmed by specialists, and developing an SPL (or using an existing one) allows them to concentrate on the intricate part of connecting the prosthesis to a body part.

Almost all of the SPLs are written in C or assembler.

The taxonomy classification gets messy here, because you can generally mix the text and the graphics, so I just think of them as at the same level.  Graphic examples are Scratch (from MIT; a neat, simpler implementation for iPad is Hopscotch — try it, it’s free, fun, and the tutorials are wonderful), LabVIEW (National Instruments), and the languages supporting the Lego MindStorms NXT and EV3 products.  Lots of development here.  Graphics are here to stay — maybe someday text-based languages will go the way of the dinosaurs.  But not soon!

In the higher level language area we have another “dimension”, and that is the design and logical organization of the code itself.  This is a big deal, but if you’ve never programmed, what follows will mean little to you.  You may even wonder why it is so important.  There are 3 ways to organize your code.  (Hint: pick the second one!)

Procedural Programming (PP) — How they all started.

Object Oriented Programming (OOP) — The good programmers did this by design, but now the languages support it formally, or even (e.g., Java) force it.  Pundits say that OOP should be learned later.  Maybe, but I teach it very early.  Why learn second-rate methods?

You’ll see.  We understand our real world through OOP glasses and thinking. (a teaser, I know)

Functional Programming (FP) — Everything is a function.  Look up the language called Haskell.

The main reason that OOP is so dominant now is that it makes it possible to write more complex programs with fewer errors.  Also, makes it easier for multiple folks to work on the same program — more importantly, by people who will repair and enhance it in the future (probably unknown to the original authors).

These approaches and those needed standards are seldom taught (or even discussed) in our current school offerings or in entertainment pieces like “The Hour of Code” (coming to your school the week of Dec 7).


Programming Languages (High Level)

Here, I’m talking about high level languages — not “machine languages” (binary numbers that interact directly with a computer) or even “assembly languages” (one-step removed, with mnemonics replacing the actual numbers.)

Dealing with the low level languages is somewhat like reading those legal notices in the newspaper.  Doable, but who has the time and/or the ability to find and assimilate all of that information?

That’s why the high level languages were invented — for humans.  But always remember, any high level language must somehow be translated into actual machine code for the computer to perform the task.

The languages come in two broad categories:

Text Based: e.g.,  C, C++, C#, Java, Python.

Graphics Based: e.g., Scratch, LabVIEW, EV3 (used in Lego MindStorms robots).  If you’ve never seen any of these, download Hopscotch on your iPad.  Go through a tutorial or two — lots of fun, and you’ll get the idea.

They can be mixed together — a combination of moving icons around and interfacing with text based routines.

There’s a taxonomy of approaches:

Procedural (the “old” way)

Object Oriented (the “newer” way.  New = last 20 years or so)

You can use many of the languages either way.  BTW, there is another approach called Functional Programming (Haskell is a language dedicated to this approach).  Almost all of the programs written today use the object oriented approach — and yes, I’ll talk about that in a future post.

Back to the languages: Ever try to learn a foreign language?  The analogy is not all that good.  You already know a language, your native one, and you can “see” and understand the new one with that knowledge (bias?).  Even so, learning a programming language is much simpler.

Sure, you don’t have a reference (AND you don’t really need any math — just logical reasoning), but the number of things to learn is much smaller — and more clearly defined.  Also, all of the general programming languages have common elements.  It’s not quite “know one, you know them all”, but close.  A close friend, born in the USA, who spoke Italian at home, said that learning French was easy — but he had a pretty good incentive.  He was courting his future wife, who only spoke French 🙂

The hardest thing about computer programming is the required precision.  It’s nothing like you’ve ever done or seen before.  Any little mistake (like forgetting a semicolon) and the program fails.  At first it freaks many people out.  Also, the computer does not care if you are smart, dumb, rich, poor, whether your parents are famous, or that you know the mayor.  The program works or it doesn’t.  (That’s the part I like best — it’s 100% on you.)

The really nice thing is that once you learn one programming language the others can be picked up fairly easily.  They all do the same basic things.

Spoken language analogy: all have nouns, verbs, adjectives, etc.  The syntax is different.  The killer is the slang and idioms.

Programming languages have slang, too, but the problem is much less severe.  But there is no “Well, you know what I mean”, or “It’s on the desk” when it’s really on the table.  Hand waving doesn’t work at all.

There is zero intelligence in a computer.  (I saw an interview recently with one of MIT’s big-time computer scientists.  They were asking him about so-called artificial intelligence (AI) and the worry that robots will soon be smarter than humans.  He said that the current state-of-the-art of AI is at about the same level as that of a retarded cockroach.  Long way to go before, “Robot masters, please don’t fire us”.)

Always assuming that the computer has zero reasoning or judgment ability will save you grief if you ever start programming.

Can You Use a Stick Shift? Should You be able to?

How about arithmetic?  Do you need to be able to add, subtract, multiply, or divide by hand?  Remember how to do square roots?  (Gotcha!)  Why learn?  We have simple electronic devices to do that — and more accurately.

Back to the original question.  Knowing how to run a manual shift is required only if it’s on the car that you take the driver’s test in.  I know lots of folks — most young folks — that have no idea.  If you asked them why there are gears in the first place, their answer would be, “gears?”

Some folks say that soon we’ll not even have to (learn to) drive.  Robots will do it for us.  Self-driving cars exist now.  Getting it to happen is not easy, but not all that difficult either.  One simplifier is that it’s mostly a two-dimensional problem — though you do need a “road map”.

So, in general, how much do you need to know and what skills do you need to operate a very complex device?  Take your pick — computer, car, airplane, software, TV, washing machine, iPad, cell phone, etc.

The answer is (of course), “It depends”.  If the devices are designed properly, then the level of knowledge that you need just depends on how you are going to use it.

My advice (did you ask?) is to know some things at a level more basic than your current operating level — that way if something goes wrong, you might be able to do something about it (call the help line?).

If you are just driving around in your SUV (automatic transmission) then knowing about and how to use a manual transmission is of no value.  You probably know about filling up the gas tank, or having the brakes checked if they feel a little “spongy”.  But if someone offered to explain how to change the spark plugs — would you even listen?  Who cares?  A valve job?  Forget it.

But if you were thinking of becoming a mechanic, you might realize that you will need to know things like that. Now, back to computers & their programming.

My first computer language was the machine language of the University of Illinois’ computer, the “Illiac” — it filled a large room — we’re talking 1958.  Since then I’ve programmed using some 15 or 20 different languages.

Should you learn some machine language?  Certainly not necessary, but understand that 100% of whatever you program has to end up as binary machine instructions used by a non-intelligent machine — and they better be correct!

Remember the old line, “A modern computer can make more mistakes in a millisecond than 1000 accountants can make in a year.”


3rd Post: Computer Programming — Different for Robots (2of2)

Almost all of the programming taught involves two dimensions.  The simulated sensor inputs, collisions of objects, etc., can be complex, but it’s trivial compared to actual happenings in the real (3D) world.

Dealing with a moving robot, even just avoiding only stationary objects, is very difficult, especially if the third dimension is relevant.  (Much easier if the robot is rolling and stays on a floor.)  Adding other moving robots or objects gets extremely complicated and difficult to deal with.

But being difficult doesn’t mean it’s impossible.  Some folks have spent considerable time (many person-years) building so-called “physics engines” that can be used to make the programming tractable.  Learning how to use these and other tools will take time — and again need to be used in “standard” ways.

If you examine a professional programmer’s code you will see references to various pre-programmed tools (e.g., a physics engine, or the language’s libraries) following standards so that another person will be able to change the programming in the most practical way — minimizing the chances of making errors.

There’s lots more, but the soap box that I’m standing on helps me yell loud and clear about learning programming properly from the ground up.  Might as well do that rather than learn bad habits from “entertaining” folks and methods.

What’s needed is more than just logical thinking — it’s disciplined logical thinking.