LRC03 Video and Some Advice

Two new concepts: looping and decisions.

Looping: Doing the same procedure over and over with different inputs and, ultimately, different outputs.

Decisions: Thinking?  Hardly, it’s only branching or jumping by command.

One command tells the machine (or our little robot) to stop taking commands in sequential order and jump to another place if the accumulator (A register) holds zero.  Another if it holds a positive number.  No smarts, just rules — obeyed without thinking.

Actually, I just misspoke.  What actually happens is that the computer gets its next instruction (command) by going to a register (the program counter) that tells it where to go for its next command — as always.  OK, what if one of the commands changes the contents of the program counter register?  You then get a jump to a new place on the next cycle.  No thinking involved.
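That whole cycle can be sketched in a few lines of Python.  The opcodes below are assumptions for illustration — a toy set in the LRC spirit (1xx adds, 7xx jumps on zero, and so on), not the actual LRC command list:

```python
def run(memory):
    """Sketch of a fetch-execute cycle for a toy 100-cell machine.

    Hypothetical opcodes (illustrative, not the real LRC set):
      0    halt
      1xx  ADD   memory[xx] to the accumulator
      2xx  SUB   memory[xx] from the accumulator
      3xx  STORE the accumulator into memory[xx]
      5xx  LOAD  memory[xx] into the accumulator
      6xx  JMP   always jump to address xx
      7xx  JZ    jump to xx only if the accumulator is zero
    """
    a = 0    # accumulator (the A register)
    pc = 0   # program counter: where the next command comes from
    while True:
        instr = memory[pc]
        pc += 1                       # as always, step to the next command...
        if instr == 0:                # halt
            return a
        op, addr = divmod(instr, 100)
        if op == 1:
            a += memory[addr]
        elif op == 2:
            a -= memory[addr]
        elif op == 3:
            memory[addr] = a
        elif op == 5:
            a = memory[addr]
        elif op == 6:
            pc = addr                 # ...unless a command overwrites the counter
        elif op == 7 and a == 0:
            pc = addr                 # the "decision": jump only when A holds zero


# A loop: multiply 4 by 3 through repeated addition.
program = [0] * 100
program[0:10] = [
    590,  # LOAD  count (cell 90)
    708,  # JZ 8: if count is zero, jump past the loop
    291,  # SUB   one (cell 91)
    390,  # STORE count
    593,  # LOAD  result (cell 93)
    192,  # ADD   addend (cell 92)
    393,  # STORE result
    600,  # JMP 0: back to the top of the loop
    593,  # LOAD  result
    0,    # halt
]
program[90] = 3   # count
program[91] = 1   # the constant one
program[92] = 4   # addend
program[93] = 0   # running result
```

Running `run(program)` halts with 12 in the accumulator.  The jump-on-zero at address 1 is the only “decision” involved: no smarts, just a command that overwrites the program counter.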

Advice:  You have to learn stuff yourself.  Use the internet — an amazing resource.  Stumble, experiment, mess around, fail.  Cost is very low.  We’re all an experiment-of-one.  Learn how you work best.

Your brain is a learning machine.  The super-duper calculator analogy is false.  How did we learn one of the most complex skills, our language?  Elite professors, latest methods?  Wonder how much later newborns would learn to talk if they were “taught” by elite experts just after birth? 🙂

Exposure, repetition, and desire get you pretty far.  No government regulations or approved methods needed.

For example, I bet that if you stuck with it, using just this simple LRC language (10 commands and only 100 storage locations), you’d be able to solve all kinds of tricky problems.

If we removed the 3-digit and 100-storage-location limits, you could solve very complex problems.  It would be a real pain, but doable.

In the early days, all we had were machine languages (in binary, yet), but we were able to simulate things like non-linear satellite orbits (not a trivial mathematical problem either).

One more:  Learn something about the underpinnings of whatever subject you are studying.  You will never (99.99% probability) deal with a machine language.  Maybe an assembly language (where mnemonics replace numbers for commands, like ADD 90 instead of 190), but even that’s doubtful.
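The mnemonic layer is a mechanical translation.  Here is a hedged sketch of what a toy assembler does with it, using a made-up opcode table (the real LRC’s numbering may differ):

```python
# Hypothetical opcode table for a toy assembler; the mnemonics and
# numbers here are illustrative, not the LRC's actual command set.
OPCODES = {"HALT": 0, "ADD": 1, "SUB": 2, "STORE": 3, "LOAD": 5, "JMP": 6, "JZ": 7}

def assemble(line):
    """Translate one mnemonic line, e.g. 'ADD 90', into its numeric form, 190."""
    parts = line.split()
    opcode = OPCODES[parts[0].upper()]
    address = int(parts[1]) if len(parts) > 1 else 0
    return opcode * 100 + address
```

So `assemble("ADD 90")` gives 190 — the mnemonic saves you from memorizing the numbers, and nothing more.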

You’ll no doubt use higher level and specialized languages — if you do actual programming at all.

You now know that whatever computer language you use, it must get (somehow!) translated into a machine language for the particular computer that’s involved.

Aside: Building these translators for different computers is a good business and they need folks.

All of this is also training your thinking.  Dealing effectively with a super-dumb machine that can calculate and execute commands in a billionth of a second requires very precise recipes — that must come from precise, careful thinking.

Could that be the most important part of all your effort?


