“Growth Mindset” and “Participation Trophies”

I ran across an article about students last week (here) that discussed “mindset” and, in particular, “growth mindset”.

Some points:

  1. Your belief about your own intelligence has a big impact on your learning behavior and whether or not you can be an effective self-directed-learner.
  2. If you believe that a subject is “too hard”, you won’t even try to learn it — and maybe even put down those who have mastered it.
  3. Conversely, if you are told that “It’s easy” and you find it difficult, that reinforces your “I’m really not that smart” feeling.
  4. If you can convince students that learning is a process (and mistakes are part of that), you’ll find that they work harder, are less easily discouraged, and learn more.  That’s the “growth mindset”.

My addition: Rewarding the “I occupied space, even tried, and got a trophy” attitude is counterproductive and should be discouraged.

Have you ever learned a craft, a musical instrument, or how to sew?  When you started you could barely do anything productive.

But, you kept at it.  Now you routinely do things that you could barely have imagined when you started.  You might also remember the frustrations, and how many times you almost quit.  Such is the stuff of learning anything worthwhile.  It was hard, but you did it.

Does learning to program a computer fit in here? Sure, it’s great training.  Definitely a metaphor for many things.  You start completely clueless and soon, you learn a language and can solve complicated problems relatively easily. You learn, from the get-go, that your initial effort always fails.  It’s just part of the process.  Finding those sneaky little errors takes some real detective work, but you learn how to do it.

What’s really fun (and satisfying!) is finding the errors in others’ programs.  Don’t gloat too much.  You’ll be on both sides of that equation — many times.

Best of all, you get a great feeling of accomplishment when your program works.  The computer doesn’t know that your family contributed a wing to the local hospital — or that one of your relatives (even you!) smuggles drugs.  Computers just don’t care.  A computer just follows your instructions — exactly!

It’s just you and the machine.  It’s 100% on you. (Well, sometimes the computer actually has a “bug”, or some mysterious, evil gremlin (usually from outer space) changes what you wrote so that the program doesn’t work — but that’s rare.)  The fault lies with the person that you see in the mirror — no one else.

The way you learn to program is in small steps.  You write many (all eventually successful) programs.  Many fail at first (in my case 99% of the time), but that’s no biggie. You just mumble a few bad words, find out why it didn’t work, and try again.  Even that’s a rewarding process — be Columbo! (Ok, youngsters, look him up!)

Aside: it’s much more efficient now.  When I learned, I had to punch the entire program onto paper tape (look up “Teletype machine”).  That tape was read into the computer (you had to wait your turn), and if the program finished (if not, lots of angry looks from the computer operators), I received a nice roll of paper tape that I put into a Teletype machine so I could look at the output.

After several tries, I finally succeeded.  It’s like my neighbor, who plays golf.  He says, “You always win.  No matter how many strokes you take, the ball always gets in the hole”.

Same deal with programming.  Now it’s all interactive (on-line) and you get almost instant feedback.  That’s great, but it can promote very sloppy programming habits — so, “no free lunch”.  I’m afraid that the schools tend to ignore those good practices and in fact promote bad ones.  But that’s another story.

If you’ve never programmed, go and get your iPad.  Download the free Hopscotch app.  Give it a shot.  Fun.  Tell me that when you (finally?) get your first program to work, it isn’t a thrill — at least a little bit.

Teach your kids or grandkids a little Hopscotch.  There’s nothing like teaching something to learn it yourself.  You definitely won’t need to give them a “participation trophy”.

P.S.  You know that you can leave comments, right?  Don’t be shy.

“Of course he can walk. Thank God he doesn’t have to!”

That was the caption on a satirical cover of New York Magazine (1970) showing a teenager in a wheelchair being pushed by a bejeweled woman with a very large mansion in the background.

Rephrase: “Why learn to write if I don’t plan on being a professional writer?” — or, more relevant here, “Why learn about computers, programming, robotics, coding, etc., if I don’t plan on that being my profession?  I can hire folks to do all that.”

Most of us believe that knowing the “Three Rs” is required to be a civilized/productive member of society.  To what degree is a judgment call that we all (or our parents) make.  It’s just part of the fabric of life.

We need to add another “R” now — for Robotics.  Well, at least the computer programming part.  Side benefit: Not only is there a dearth of competent programmers now, but the demand is running away from the supply.

That’s good, but my main argument for learning the basics of programming is that it is becoming more and more a part of so many activities.  We’re at the “Model T” stage now. Knowing how to program will be as important as knowing how to write.

Understanding the underpinnings of robotics will give you more options.  Even if  you become a salesperson or financier, the chance that you’ll be involved with robotics will increase.  Knowing the basics (just like having good speaking and writing skills) will be valuable.

But, if you get more technically involved, understanding the fundamentals will give you a distinct advantage.  You will not have to unlearn the bad, limiting habits.

So, as I’ve said before, why not learn these methods and procedures from the beginning?  I plan to help with that.

Again, stay tuned.


“Uh-Oh” Robots are about to take over our jobs! + a Puzzle

Scare piece 1,994 (this week), entitled “The Rise of the Robots: Is This Time Different?”.  Actually, it’s a good article and answers the question with a definite “maybe”.  Always a good answer when forecasting the future. (Redundant?  Does anyone forecast the past?  Just historians, increasing their narrative’s punch.)

The problem here is that (a) the future is unknown, and (b) it’s easy to see how a new technology will eliminate some existing jobs.  So, because of the “unknown” part, it’s hard to see what new jobs will emerge.  What if they don’t arrive soon enough, or are too technical for existing folks?

Even though I am not a certified futurist, I humbly submit that the demand for “smarts” will exceed the demand for “muscles” (i.e., physical labor).  Maybe someday machines will produce all of our necessities and we will just do “mental” things, like selling, financing, insuring, inventing, creating, etc.  But, who knows?  My Crystal Ball is still in the shop, but its last message was, “Ride the horse in the direction it’s going!”

I recently talked with the head of robotics education at Northwestern University and asked him, “What should kids learn to prepare for a robotics career?”  His answer was something like, “Oh, what you’d expect.  Math, electronics, computer science, etc., but the most important thing is to be able to think logically.”

Two books come to mind: “Thinking as a Science”, by Hazlitt, and “How to Solve It”, by Polya.  Both have been around for years.  They detail a systematic approach to analyzing and solving problems.

It’s important to develop a systematic approach.  If a computer is involved, then add “precise” to “systematic”.  As I said before, if you are not a programmer, the amount of precision required will be mind-boggling.

A little off subject, but I just saw this cute “thinking” puzzle.  Joe and Sam race 100 meters and Joe wins by exactly 10 meters.  (Both are idealized runners — they get up to speed instantaneously and run at a constant rate for as long as it takes.)

After the race, Sam, who lost, says, “Let’s race again, but you start 10 meters back.”  Joe agrees and they race.  Who wins?

Can you figure it out without algebra?  Hint:  Where will they both be when Sam has run 90 of the 100 meters?

How about solving it with some algebra?  I’ll show you (at least one way to do it) in my next post. (follows below)


“Welcome, Robot Overlords. Please Don’t Fire Us”

Catchy headline (2013).  (article here)  The magic date is 2025 when “they” can build a computer with the processing power of the human brain.

Scared?  Well, our brain is a learning machine, and so far the attempt to get computers to learn has not gone so well.  Physicist Michio Kaku put it in perspective, here.  Some funny lines, but his main point was that the current state of Artificial Intelligence (AI) is about that of a retarded cockroach.  Maybe in 50 years or so, but there are lots of obstacles to overcome.

Do you remember all of those film clips showing early attempts at flying machines?  Most attempted to mimic birds by flapping their wings.  Folks spent countless hours trying to figure out how to increase “flap speed” and make the whole airplane strong enough to do it.

Seemed a reasonable approach at the time, but looking back, it was clearly wrong.

Here’s my take.  Computers are really good at things humans are horrible at and vice versa.  So consider advancements along the lines of things computers do well.

Computers: Good at calculating and retrieving information. Horrible at thinking or judging what’s right/wrong.

Humans: Good (sometimes!) at thinking and judging. Horrible at calculating and retrieving information. Plus we hate doing it.

So no problem, right?  Not exactly.  Think of any job.  What part of it is just routine, tedious, and repeatable?  That part may go away, and soon.  There are lots of economic incentives to make that all happen.

It’s clear that computer-controlled machines will have a huge impact on our lives.  There will be lots of “panic-type” articles and warnings from experts.  Many will just be from the doom-and-gloom industry trying to increase its publicity market share.  But it’s happening, and the rate of change will increase.

How about we get ourselves, our kids, and our grandkids ready to deal with this future?  How about our public schools?  Will they help?  Sure, but the current emphasis on the “it’s really easy and fun” approach isn’t going to change soon — the current regulatory environment makes it very difficult to innovate.

School classes are aimed at getting students to pass tests.  Higher scores enhance the reputation of both teachers and schools.  That’s OK for background information, but not for actually becoming a competent programmer.  You have to be able to actually write the program and get it to run on a real computer.  It’s a craft.  Just passing tests doesn’t cut it.

The real world awaits.  Actually, it doesn’t “wait”.  It’s all happening now and we all better get ready.  There will be so many opportunities — most, we can’t even think of now.

Remember the first cell phone?  Look now. Who would have thought?  Betcha it will be the same with robotics.

“Learn Brain Surgery in 2 Weeks — It’s Not That Hard”

Well, I made that up — I did not actually find that headline.  However, I see a lot of “Learn ‘X’ in 2 weeks” offers, where ‘X’ is almost anything: French, Python, Java, cellphone apps, etc.  Folks sign up, go through the process, and are happy that they can “speak some words”.  Possibly even do something useful. :-)

My “Brain Surgery” headline sounds absurd not because the learning process is fundamentally different, but because the consequences are so much more severe.  The principles are the same.

The consequences of “Java in 2 weeks” are minimal.  Maybe your program won’t work at all, or will move a screen icon right instead of left.

So if rigorous “safety” techniques aren’t taught, it’s no big deal.

But what if your program were controlling a robot or machine that could damage property or harm a person?  You might want to know about appropriate methods to minimize those kinds of possibilities.  You think?

The stated argument for not dealing with these methods initially is, basically, that it’s boring and will “turn folks off”.  We’ll lose them.  All of that can be learned later.

But I believe that there’s another reason.  The folks building and teaching these courses have never built programs where the consequences of errors were of any major concern. Building a game, printing out numbers, or moving icons around on a screen is not, well, dangerous.  If it doesn’t work  — just quit, change the code and run again.

If that’s your world, why bother?

A software product that is used by folks who did not build it is an entirely different matter.  Even if there are no dangerous failure consequences, it better work well, or users will quit using it.  They might also ask for their money back.

Real programs have errors, need fixing from time to time — or need to be enhanced.  All of that has to be done by people — often not the ones that built it.

Adhering to proper standards and proven methods can make the fixing and enhancing much easier (read: possible!).  There is a “rule-of-thumb” that the chance of making another error while fixing a problem is 30%.  With proper techniques that can be reduced.  Generating programs with very few errors is very important, but difficult.

Does anyone remember the Obamacare sign up sites?  All fixed now?

Why not teach proper standards and methods at the beginning?  Is it possible?

Yes.

2nd Video & Overall Education Plan

The first video, LRC01A, described the LRC (fictitious) computer, some of its language commands, the fetch-decode-execute cycle, a simple add-two-numbers program, and a homework problem to add 3 numbers.

The LRC computer uses decimal numbers.  That’s one fiction.  Current computers use binary numbers.  I’ll put up a video explaining how number systems work, but that is not necessary to know now to understand the basics of a computer’s functionality.  Sure, I could do it in binary, but the input command, 901, is easier to remember than 100100000001.  In my example we added 32 and 5 to get 37.  That would be 100000 + 101 = 100101.  A bit hard on our eyes, but not on a computer’s!
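
If you have Python handy, you can check that arithmetic yourself.  This little snippet is my addition, not something from the video; it just uses Python’s built-in conversions.

```python
# Quick check of the decimal/binary arithmetic above, using Python's
# built-in bin() and int() conversions.
print(bin(32), bin(5), bin(37))            # 0b100000 0b101 0b100101
print(int("100000", 2) + int("101", 2))    # 37 -- the same sum, written in binary
```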

I won’t repeat the old joke (out loud, anyway) “There are 10 kinds of folks in the world.  Those that understand binary and those that don’t.”  🙂

As you’ll see in this second video, things get tedious very quickly with machine language (even using decimal numbers).  Because ultimately we’ll be dealing with computers that control and interact with robots (or any physical machine), it’s important to understand how the computer works.

You won’t be exposed to much machine language.  You won’t need it unless you get involved with the nitty-gritty — which you may never do, and if you do, you can learn the specifics then.  Any computer that you come across will have lots more registers and commands, but the same kinds of tasks to perform.

Summary: From the first video, LRC01A, you should now know the following:

  1. Have a mental picture (you drew it!) of a room with a little robot that runs around following commands.
  2. The commands 901, 902, 3xx, 5xx, 1xx of this computer’s (machine) language.
  3. How the program to add 2 numbers works (see the sketch after this list).
  4. How the robot’s fetch-decode-execute cycle works.
  5. Hopefully, how to write a program to add 3 numbers.
  6. Maybe wonder why there aren’t commands to get inputs directly into the mailboxes (without going through the A register), or to do some calculations directly on the numbers in the mailboxes.
  7. I didn’t talk about that last point much.  It is certainly possible to wire up a computer to do that, but it gets very complicated and making the A register the conduit greatly simplifies the circuitry — at the expense of more steps in the program.
  8. However, computers are so fast that the “extra” steps cost little.  There are also some timing advantages in modern computers with their many registers, because many fetch, decode, and execute operations can be done in parallel.  But that’s another story.
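
To make items 2, 3, and 4 above concrete, here is a tiny sketch, in Python, of the add-two-numbers program running through a fetch-decode-execute loop.  The opcode meanings (901 input, 902 output, 1xx add, 3xx store, 5xx load, 000 halt) follow the common Little Man Computer convention that the LRC is modeled on, so treat this as my approximation rather than the exact program from the video.

```python
# A minimal, assumed sketch of the LRC-style machine: 100 mailboxes, one
# A register (accumulator), and a fetch-decode-execute loop. Opcodes follow
# the common Little Man Computer convention (901 input, 902 output,
# 1xx add, 3xx store, 5xx load, 000 halt); details may differ from the video.

def run(program, inputs):
    mem = program + [0] * (100 - len(program))  # program sits in the first mailboxes
    acc = 0                                     # the A register
    pc = 0                                      # which mailbox to fetch next
    inputs = iter(inputs)
    outputs = []
    while True:
        instruction = mem[pc]                        # fetch
        pc += 1
        opcode, address = divmod(instruction, 100)   # decode
        if instruction == 0:                         # 000: halt
            return outputs
        elif instruction == 901:                     # input a number into A
            acc = next(inputs)
        elif instruction == 902:                     # output A
            outputs.append(acc)
        elif opcode == 1:                            # 1xx: add mailbox xx to A
            acc += mem[address]
        elif opcode == 3:                            # 3xx: store A into mailbox xx
            mem[address] = acc
        elif opcode == 5:                            # 5xx: load mailbox xx into A
            acc = mem[address]

# Add two numbers: input, store it, input the second, add the stored one, output, halt.
print(run([901, 399, 901, 199, 902, 0], [32, 5]))   # -> [37]
```

The simulator at www.peterhigginson.co.uk/LMC (mentioned below) does the same job, with a much nicer display of the mailboxes and registers.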

For this video LRC02, here’s what I’ll cover:

  1. How the simulator (on the internet) at www.peterhigginson.co.uk/LMC works.
  2. The rest of the machine language commands.
  3. The 3-number add program.
  4. Generalizing to add more numbers using a loop.
  5. Code for “add numbers until a zero input”.
  6. Assembly language to make life easier (for us).

The plan will be to go from the LRC construct to general-purpose graphical and higher-level textual languages that work on our modern computers.  My method will be to expose you to many problems.  Some we’ll solve together, some on your own.  The only way to learn these languages is to write programs and get them to work.  One step at a time.  I’ve been in the software biz for a long time.  I won’t mislead you.

After a while I’ll introduce you to some simple robots (Sphero, Lego MindStorms, GoPiGo, etc.).  You’ll then experience the fun (and frustration!) of working with objects not on a screen.

There’ll be some side trips along the way, but the internet is a very rich source of information.  If one of my explanations is not your cup of tea, then I bet that you can find one more to your liking by searching (googling?).

It’s all about self-learning.  That’s what actually happens anyway.  Even if you are exposed to the best teachers in the world you have to do the learning part — alone! (horse to water and all…)

Learn the fundamentals and then you can take off!

Here’s the second video, LMC02 (Will change to LRC02 soon)

Also following is LRC2A-S, a description of the simulator, on YouTube.  Go to the simulator and put in your own programs.  If this is all new to you, just hang in there.  Using only numbers to program is like going to a foreign country and hearing the language for the first time.  It’s easy to make mistakes.

That will change soon — it will get easier.


Computers, People: Why can’t we just, well, talk?

Ever go to France with a guidebook and French-English dictionary and try to have a conversation with a native? (I’m assuming that you are a native English speaker and know little or no French).  “Why do they talk so fast?” 🙂

While it may be possible to order a meal or get directions, it’s impossible to have a real back and forth conversation.  Very little information can be exchanged in “real time”.  Discussion?  Impossible.  Every time they say something (even if you understand it) you have to stumble around looking stuff up, etc. before you can respond. (Nodding & smiling often works!)

Discussion means back and forth flow.  What you say affects my response.  If there is more than a very brief time between responses, we get frustrated.  How about watching TV news when folks have to “go through the satellite links”?  Even that second or so is annoying and makes it difficult to exchange information.

We usually talk in “full-duplex” mode.  You and I can both talk and listen at the same time.  We can also react to each other’s facial expressions.  If you have to do the “over and out” walkie-talkie thing, not much gets discussed.

However, I can send you a detailed recipe for baking an angel food cake, even if you are many miles away and I send it by mail.  You will be able to bake it — if my recipe is good.  If not, then disaster. (Sorry, I meant 1 3/4 cups of SUGAR, not SALT.)

Precision gets very important when you can’t have a real-time conversation.  Same with computers, where conversation is not possible.  Why?  The so-called “time scales” are radically different.  Here’s a way to visualize the problem.

Imagine that you are somehow transformed into the world of a computer’s CPU. (Central Processing Unit — that’s the part that does all of the calculating.)  The time to calculate is measured in millionths or billionths of a second.  Let’s make that calculating time equal a second or so and adjust all of the other times accordingly.

You have a few pieces of data in your head that can be recalled instantly (the registers in the CPU).  You also have information lying on your desk (L1 cache) that’s a couple of seconds away.  You also have a file cabinet (L2 cache) that takes a few seconds to access, and possibly some files in the basement (L3 cache) that you can get to in a few minutes.  Hopefully, all of these are organized well enough that you don’t have to take too long finding what you want.

If what you want is not there, then maybe a trip to the library (RAM — Random Access Memory).  It will take you a few minutes to an hour, depending on distance and how hard it is to find the information.

But, what if you need some input from a person?  You send out a request.  You won’t hear back for maybe 3 or 4 years.  (You could learn to be somewhat fluent in French in that time!)

My point is that the time scales between computers and humans are so different that real-time conversations with the CPU are impossible.  You have to communicate via recipes, called programs.  Otherwise, the computer’s CPU will just be sitting around doing nothing almost all of the time.
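
Here’s a rough, back-of-the-envelope version of that scaling in Python.  The nanosecond figures are ballpark numbers I’m assuming (they vary a lot from machine to machine); only the ratios matter.

```python
# Back-of-the-envelope version of the time-scale analogy. The nanosecond
# figures are rough assumptions, not measurements; only the ratios matter.
# We stretch time so that 1 nanosecond of machine time feels like 1 second.

latencies_ns = {
    "one CPU calculation (registers)": 1,
    "L1 cache access": 1,
    "L2 cache access": 10,
    "main memory (RAM) access": 100,
    "waiting for one human keystroke (~0.1 s)": 100_000_000,
}

SECONDS_PER_YEAR = 3600 * 24 * 365

for task, ns in latencies_ns.items():
    scaled_seconds = ns  # 1 ns -> 1 s, so a task of X ns takes X "seconds"
    if scaled_seconds < SECONDS_PER_YEAR:
        print(f"{task}: about {scaled_seconds} scaled second(s)")
    else:
        print(f"{task}: about {scaled_seconds / SECONDS_PER_YEAR:.1f} scaled years")
```

Run it and the keystroke line comes out to roughly 3 years, which is where the “3 or 4 years” figure above comes from.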

Good recipes unleash all of the super fast (millions of calculations per second) computing power.  But if there are errors…?  Great line: “A modern laptop computer can make more mistakes in 1 second than 1000 tax accountants can in 100 years!”

What about those robots (at the museums) that talk to you?  Those programs are very complex and have massive lookup tables of responses.  There are programs to digitize your speech, sort out the essentials, find them in a massive dictionary, and compute a response from some recipes — like a very fancy FAQ.  Very difficult to do.  Impressive stuff.  There are even versions that work on an iPhone (e.g., SIRI).

Amazing? For sure, but realize that underneath it all is a bunch of recipes (programs) and computations by a totally passive (yes, dumb) piece of electronics that has all of the thinking power of a stone.


Errors, Side Effects and Unintended Consequences

Pretty broad subjects.  Difficult, too, in that they are largely unknowable. (If you knew there was an error, you would have fixed it!)

Can a program be error free? Probably not. (Old joke: The only error-free program is one that is not used).  However, if you make a programming product that other folks use, you get errors pointed out quickly — and not always politely!

To minimize errors you first must design the program properly.  There’s definitely an art to doing it.  Clarity is key — and like any human activity, some are better at it than others.

Next, the code itself should follow standard conventions so that it is, well, readable.  Readable?  Yes, it is so easy to write “clever” code that works but in two weeks will look like gibberish.  “Gee, what was I thinking of?  Oh, now I see.  But what is that variable?  A typo?”  If you’ve never programmed, this scenario is hard to imagine.  If you have — you’ll be nodding!
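
A tiny, made-up illustration (not from any real project): both functions below do the same thing, but only one of them will still make sense in two weeks.

```python
# Two versions of the same job. The first is the kind of "clever" code that
# turns into gibberish later; the second reads like what it actually does.

def f(x):
    return sum(filter(lambda v: not v & 1, x))

def sum_of_even_numbers(values):
    """Add up the even numbers in a list."""
    even_values = [v for v in values if v % 2 == 0]
    return sum(even_values)

print(f([3, 4, 7, 10]), sum_of_even_numbers([3, 4, 7, 10]))   # 14 14
```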

As to side effects and unintended consequences, these are difficult to estimate beforehand.  With experience and, again, proper methods, some can be identified — best during the design process, before any coding takes place.

To illustrate, consider an overly simple example:

We want to build a program that will let someone input numbers, one at a time, and when a zero is input, add them all and output the sum.  Pretty simple.  The only tricky part is that the program doesn’t know how many numbers to expect.  It has to “look” for a zero to know. (BTW, that approach would be a disaster in a large file of numbers, because a zero could be anywhere.  You wouldn’t want to get a partial sum just because one of the data items is zero.)

OK, I build it and, hooray, it works.  For a test, I input 3, 7, 22.3, 0 and get a printout that says, 32.3.  Perfect.  All done.

I now give it to some friends and they input 42, 3, 16.5, #44, 0.  What happens?  I get a phone call. 😥  “Are you kidding?  That program doesn’t work!”  They send me the input data and (clever fellow that I am) I spot the ‘#’ immediately.  What to do?  My “perfect” program works — as long as the inputs are only numbers.  It doesn’t know what to do with the ‘#’ sign.  Responding, “Hey, careless people, check your inputs!” might not be optimal!  Good way to lose friends (and customers!).

Probably the simplest solution is to modify the program. Put in an “If” clause that checks to make sure each input is a number.  If not, stop, tell the user, and ask for a correct input. (That would take more code than the original program!)  If the program is just for me, I don’t need that kind of “hand holding”.  I know the limitations.  Plus, I don’t want to bother inputting (and testing) the error catching code.
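
Here is roughly what that looks like in Python.  This is a sketch of the idea, not the exact program described above: first the naive version, then the one with the “is it really a number?” check.

```python
# A sketch of the "add numbers until a zero is entered" program.
# The first version assumes every input is a number (so '#44' crashes it);
# the second adds the input check discussed above.

def sum_until_zero_naive():
    total = 0.0
    while True:
        value = float(input("Enter a number (0 to finish): "))  # blows up on '#44'
        if value == 0:
            return total
        total += value

def sum_until_zero_checked():
    total = 0.0
    while True:
        text = input("Enter a number (0 to finish): ")
        try:
            value = float(text)
        except ValueError:                       # not a number: complain and re-ask
            print(f"Sorry, '{text}' is not a number.  Please try again.")
            continue
        if value == 0:
            return total
        total += value

# With the inputs 3, 7, 22.3, 0 either version returns 32.3.
print(sum_until_zero_checked())
```

Notice that the checking version is roughly twice as long, which is exactly the “more code than the original program” complaint above.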

Here’s another potential problem — but much more subtle. Suppose that my code uses an internal memory location to hold the partial sums.  So, as each number is read in it just adds it to whatever is in that location and stores the sum back.   Certainly a reasonable way to do it.

But suppose that I make a mistake and, by accident, select a memory location that just happens to be inside another program on your computer.  Now, my program works fine, but when you try (maybe next week) to use that other program, it fails. (At least you’ll never know that I caused it!)  We call that a “mad-bomber” error.  Fortunately, most languages and operating systems now prevent that from happening. (Don’t tell anyone, but it still can happen.)

I’ve been in the programming business for many years and worked with some very smart folks.  We all have horror stories of an extremely well-tested product failing almost immediately when others (who weren’t involved in the building) started using it.  It’s very difficult to think of all of the possible side effects and errors.  Building software products can be very humbling!

Over the years, professionals have developed good practices and methods  to minimize many programming errors.   But we still need clear, logical thinking.

That kind of thinking must start at training’s beginning.


First Video (Little Robot Computer — LRC01A on YouTube)

My grandkids just sent me an email, “ABOUT TIME PAPA!”.

This first one describes a fictitious computer.  Not real, but it has elements common to all modern computers.

I start with the computer and its language rather than with a higher-level programming language more accessible to humans.  Why?  Three reasons:

  1. No matter what language or style you use, it somehow has to get converted into the machine language of the particular computer.  You might say, “So what? I don’t need to know that.”  Maybe true, but I believe that you should know the basics of how a computer actually works.  Especially considering my other reasons.
  2.  If your software ends up controlling or interfacing with a physical robot, then the details of how the computer actually works may be needed to get a better result — or more likely, to find out why your programs aren’t doing what you thought they would.
  3. A physical robot can be dangerous.  Safeguards have to be built in.  Many of those involve interrupts that require detailed understanding of the controlling computer(s) — could be more than one!

There’s another reason, too.  Programming requires a way of thinking that is so much more precise than that needed for normal activities.  Even with this very simple computer you’ll see how careful you must be and how quickly things get complicated.  Think of it as mental calisthenics.  Start with “one-pound weights”.  Exercise carefully.

Here’s the link to this YouTube video, LRC01A

There will be many more, including links to a simulator for this fictitious computer.


Is a Washing Machine a Robot?

Sure, but it’s a little limited “going to its left”.  🙂  How about a “spam filter”, or one of those remote-controlled cars?

Most references talk about entities that are autonomous and those that are not.  When I talk about the “robotics revolution”, I mean robots with the following 4 broad qualities:

1. Something physical that can be touched, felt, etc.
2. Autonomous — makes decisions and takes actions on its own — not remote controlled.
3. Senses its environment.  Gets data (and does something about it.)
4. Has tasks to perform (e.g., not run into a wall?)

Number 2 is the tricky part.  There has to be a “brain” of some sort — practically speaking, a computer — and it has to be programmed.  And there is the rub.

This kind of programming can be much more complex than just moving things around on a screen.  I’m not disparaging that level of programming — those games that bring in jillions of dollars are extremely complex.  However, adding physical objects that make their own decisions in the real 3-D world is another order of difficulty entirely.

There are many remote-control devices that certainly require sophisticated technology.  Much of what I’ll be talking about and teaching applies, but my emphasis will be to help prepare folks for working with autonomous robots.

Consider getting data from the environment and then doing something appropriate (e.g., a robotic car sensing a red light at a corner, then stopping).  The programming of its “brain” can be extremely complicated, and the consequences of program errors can be catastrophic.
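
To give a feel for that “sense, decide, act” pattern, here is a toy loop in Python.  Everything in it is hypothetical (the sensor and motor functions are just stand-ins); a real robot controller is vastly more involved.

```python
# A toy sense-decide-act loop. The sensor and motor functions are
# hypothetical stand-ins, not a real robot API.
import random
import time

def read_traffic_light():
    """Stand-in for a real camera/sensor pipeline."""
    return random.choice(["red", "green"])

def set_speed(meters_per_second):
    """Stand-in for a real motor command."""
    print(f"speed set to {meters_per_second} m/s")

for _ in range(5):                   # a real controller loops many times per second
    light = read_traffic_light()     # sense
    if light == "red":               # decide...
        set_speed(0)                 # ...and act: stop at the light
    else:
        set_speed(5)                 # ...and act: keep rolling
    time.sleep(0.5)
```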

Again, and you’ll hear it often, very high quality building standards must be adhered to — just like those necessary for building a skyscraper.

Most “real world” projects involve lots of folks, including multiple developers.  They have to be able to read each other’s code. (Remember, code is for the computer, but also for people — both current developers and future ones doing fixes, enhancements, etc.)

Standards and “Zero Defects” techniques must be implemented — so why not teach beginners — at the, uh, beginning?