
How to Build a Brain (Part 1) – The Challenge

 

How do you build a brain? How should I know? I’ve never built a brain. But I did spend a whole lot of time once thinking about how to do it.

In the mid-nineties, I was working for a software company in Dallas that built software for insurance administration. I was rolling off of the second project I had done there, starting my new job as Research Manager. This was technically a division-level job, but my division actually consisted of me and a part-time admin that I shared with the core Development group. My mandate was to explore various new technologies, in the expectation that at least some of what I did would prove useful and could be integrated into a future product. Those two projects matter here, because they led me directly to the first request I got, and thus into my quest for a brain.

The first of them was an imaging project, designed to produce a paperless office. The idea was that when the mail came into the mailroom, it would be scanned to an image, the image would be indexed, and then the paper would be shredded. Then, based on whether it was an application, a claim form, or other correspondence, it would be routed into the appropriate process. My project integrated this capability into all of the company’s existing desktop software systems.

One of the cool features was that when doing data entry into one of those systems, in addition to simply displaying the ‘paper’ form, it would synchronize to the image viewer, and magnify the portion of the form that contained the pertinent information. So, if the data entry screen was sitting on ‘Last Name’, then the portion of the form that contained the last name would be magnified. When the user tabbed to ‘First Name’, the magnified image would change as well.
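The mechanics of that synchronization can be sketched roughly like this; the field names and pixel coordinates below are invented purely for illustration:

```python
# Hypothetical sketch of field-to-image synchronization. Each data-entry
# field maps to the (left, top, width, height) region of the scanned
# form that contains it, in image pixels. All values here are made up.
FIELD_REGIONS = {
    "last_name":  (120, 340, 400, 60),
    "first_name": (540, 340, 400, 60),
    "policy_no":  (120, 420, 300, 60),
}

def region_for_field(field_name):
    """Return the image region to magnify when the cursor enters a field."""
    return FIELD_REGIONS.get(field_name)

# When the user tabs from 'Last Name' to 'First Name', the viewer
# looks up the new region and magnifies it.
print(region_for_field("first_name"))  # (540, 340, 400, 60)
```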

The second project involved creating a programming language, to serve as a bridge between the components of a massive revamp of the company’s entire product line. I spent about two years total building out a pile of specialized functionality, but the first version, with handling similar to the BASIC language, was done within 90 days. That included: a Compiler that turned source code into platform-independent byte-code; a platform-dependent Virtual Machine Runtime module; and an Integrated Development Environment with all the bells and whistles, including an integrated debugger. (It was shockingly similar to the then-current version of Visual C++. Because I did an exact clone of that.)
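To make that pipeline concrete, here’s a toy sketch of the compile-to-bytecode, interpret-on-a-VM split. Everything in it (the opcodes, the tiny postfix “language”) is invented for illustration and bears no resemblance to the actual product:

```python
# Toy illustration of the compiler -> bytecode -> virtual machine split.
# The compiler emits platform-independent bytecode; the VM is the
# platform-dependent part that interprets it.

PUSH, ADD, MUL = 0, 1, 2  # invented opcodes

def compile_expr(tokens):
    """'Compile' a postfix expression like ['2', '3', '+'] to bytecode."""
    code = []
    for tok in tokens:
        if tok == '+':
            code.append((ADD, None))
        elif tok == '*':
            code.append((MUL, None))
        else:
            code.append((PUSH, int(tok)))  # numeric operand
    return code

def run(code):
    """Interpret the bytecode on a simple stack machine."""
    stack = []
    for op, arg in code:
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

print(run(compile_expr(['2', '3', '+', '4', '*'])))  # (2 + 3) * 4 = 20
```

The point of the split is that `compile_expr` runs once, anywhere, and only `run` has to be ported to each platform.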

After all that, I felt like King Kong on steroids.

Which is good, because the first thing the boss man wanted me to do was to go back to the imaging system, and figure out a way to get the data off the paper form and into the data entry form, without someone having to sit there and type it.

What… handwriting?

No, not handwriting, and in fact, since there were people in the business working on handwriting software we immediately decided to move past that and assume there was machine-readable text. No, he wanted me to figure out how to read the text and fill out the forms properly, regardless of the verbiage the people used, taking us into the realm of software known as Natural Language Processing.

Put simply, the typical approach in NLP is to use a database of keywords and phrases that can be matched against the input text, looking for matches to index ‘meaning’ from the text. But covering the entire panoply of insurance (life, health, and property), the database of phrases is virtually infinite. Have you seen the series of ads from Farmers Insurance showing all the weird claims they’ve paid off? Having a set of key phrases so you can match a description of the damage incurred during an incident where a moose became violently romantic with your Subaru is a fairly specific expectation.
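In miniature, the phrase-matching approach looks something like this. The phrase table and categories are made up; a real system would need vastly more entries, which is exactly the problem:

```python
# Minimal sketch of keyword/phrase matching for claim classification.
# Phrases and categories are invented for illustration.
PHRASE_INDEX = {
    "water damage":  "property_claim",
    "rear-ended":    "auto_claim",
    "hospital stay": "health_claim",
    "hit a deer":    "auto_claim",
}

def classify(text):
    """Return the categories whose key phrases appear in the input text."""
    text = text.lower()
    return sorted({cat for phrase, cat in PHRASE_INDEX.items() if phrase in text})

print(classify("My car was rear-ended at a stoplight"))  # ['auto_claim']
# The moose problem: no phrase in the table matches, so nothing is found.
print(classify("A moose attacked my Subaru"))  # []
```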

So, we agreed that we wanted something better than that. What we really wanted was software that would allow a computer to ‘understand’ text as thoroughly as it ‘understands’ numbers. Well, dealing with numbers is built into the processor itself, whereas the only thing the processor does with text is shove it around and compare it in various ways. Clearly, we were looking for something new.

At around this same time, I happened to re-read a short story by Robert Heinlein called Blowups Happen. There is a character in the story named Lentz, a Symbologist, who discusses the nature of thought, maintaining that human beings think in symbols, that in fact they can’t think without symbols, and that symbols for higher concepts are themselves made up of simpler symbols.

I’ll give you an example: do you remember in the movie Ghostbusters, when the demon dog thing shows up in Rick Moranis’ apartment? He comes tearing out of the front door of the building, screaming that there is a bear in his apartment. The bystanders laugh, until the thing follows him out, at which point one of them describes it as a cougar.

Neither of those witnesses could accurately describe what they had seen because much like Donnie, they had no frame of reference… for a demon dog thing. They saw something of a particular size, moving like an animal, possibly heard a growl, and their brains filled in the appropriate details to produce whatever animal with which their own unique set of symbols was most comfortable.

It was only when the beast stopped moving that Rick Moranis could assimilate this as something new, and he said, “nice doggie”. In other words, once he could see it clearly, his brain could form a new set of symbols that accurately matched the reality in front of him. He saw the size, and the shape, and the physicality; he saw the horns and the glowing red eyes. And his brain created new symbols that encompassed all those details, and the differences between this and other animals. You can assume that if he ever saw another demon dog thing throughout the rest of his life, he would immediately recognize it as such, because of the new symbols created here.

Putting the idea of human brains thinking in symbols together with the desired capabilities of this new tool, I realized what was truly needed. All I had to do was write a piece of software that could manipulate and aggregate symbols in the same way as a human brain. If I built that, then not only would it be able to read these forms and ‘understand’ the information within, but it would also be useful in an endless variety of other tasks.

I didn’t really consider it Artificial Intelligence, because I tend to use the science fiction standard on that, and I didn’t expect this to ever wake up and care about its own existence. But it would be able to do many things that currently require a human, including, by the way, reading handwriting.

Write software that can deal with symbols in the same manner as a human brain? That shouldn’t be too difficult.

(King Kong, remember?)

———————————————————————————————————————————————————

This is the end of part one out of some number larger than one. In the next part, I’ll discuss just what it is that a human brain can actually do, and how it gets there. As I complete additional parts, I’ll grab additional Group Writing days, planning to finish before the month is up.

The inspiration for this post came during a recent conversation started by @hankrhody, entitled “How to Automate a Job Out of Existence”. In the comment section, @dnewlander remarked that you can’t make a machine understand English, and I differed. He asked how, and I threatened to write a post about it someday.

So Derwood, this one’s for you.

Published in Group Writing
This post was promoted to the Main Feed by a Ricochet Editor at the recommendation of Ricochet members.

There are 37 comments.

  1. Member

    Judge Mental: So, Derwood, this one’s for you.

    Love it.

    • #1
    • February 5, 2019 at 11:53 pm
    • 4 likes
  2. Member

I’ve spent a half century believing that Heinlein was onto something with his (let’s face it) naive but intriguing suggestion that bias in media could be detected numerically–x number of harsh negative adjectives, y number of passive-voice avoidances of saying something positive.

    This post could be the beginning of a truly terrific series. It already is. 

    • #2
    • February 5, 2019 at 11:53 pm
    • 7 likes
  3. Member
    Judge Mental Post author

    Gary McVey (View Comment):

    I’ve spent a half century believing that Heinlein was onto something with his (let’s face it) naive but intriguing suggestion that bias in media could be detected numerically–x number of harsh negative adjectives, y number of passive-voice avoidances of saying something positive.

    This post could be the beginning of a truly terrific series. It already is.

    Thanks for the probably misplaced confidence.

    • #3
    • February 5, 2019 at 11:55 pm
    • 4 likes
  4. Member

    Arahant (View Comment):

    Judge Mental: So, Derwood, this one’s for you.

    Love it.

    I mostly liked it for that. I’ll have to read it properly later.

    • #4
    • February 5, 2019 at 11:58 pm
    • 3 likes
  5. Member
    Judge Mental Post author

    Matt Balzer, Imperialist Claw (View Comment):

    Arahant (View Comment):

    Judge Mental: So, Derwood, this one’s for you.

    Love it.

    I mostly liked it for that. I’ll have to read it properly later.

    That’s okay. I’m a like whore.

    • #5
    • February 6, 2019 at 12:01 am
    • 3 likes
  6. Member

    OK, as the movie guy, here’s what interests me. There have been plenty of SF/fantasy films with AI that equals human consciousness; 2001’s HAL may be the best known. When 2001 was written, circa 1964-66, full consciousness didn’t seem unobtainable in 35 years. It’s easy to write humanoid characters, like C3PO.

    Next in line were entities who/that were conscious to a degree: The Forbin Project. “Mother” in Alien. Joshua in WarGames. They speak to us in our language and understand, but also dangerously misunderstand. 

    After the wildly unrealistic hopes of early AI were dashed, it took a while for the next wave to bubble up. From the layman’s standpoint it looks like there were two: expert systems, which constrained the ‘universe’ of knowledge and questions; and neural networks, perceptrons, which would ultimately prove to be part of a mysteriously successful effort to conquer natural speech and handwriting.

    Where we are today isn’t, to my knowledge, on any SF movie map: we’re still nowhere remotely near machine consciousness, but we do have natural language speech recognition and an ability to perform useful tasks and follow instructions. 

    Maybe Huey, Dewey, and Louie, the mute mini-robots of Silent Running, best depict the kind of AI we have today. 

    • #6
    • February 6, 2019 at 12:15 am
    • 7 likes
  7. Coolidge

    What a great and fascinating post! Wow!

     

    • #7
    • February 6, 2019 at 3:28 am
    • 1 like
  8. Member

    Pre-PMSC Cybertek? I was at SOLCORP.

    • #8
    • February 6, 2019 at 6:10 am
    • 2 likes
  9. Member
    Judge Mental Post author

    Brian Wolf (View Comment):

    What a great and fascinating post! Wow!

     

    Thank you, sir.

    • #9
    • February 6, 2019 at 6:49 am
    • Like
  10. Member
    Judge Mental Post author

    Danny Alexander (View Comment):

    Pre-PMSC Cybertek? I was at SOLCORP.

    See my PM.

    • #10
    • February 6, 2019 at 6:49 am
    • Like
  11. Thatcher

    Whoa, @judgemental! I love it when you talk bits-and-bytes and platforms…More, please?

    Serious question: How come voice-input software is so non-intuitive that one has to verbalize punctuation? Can’t grammar rules be plugged in like spelling? (That’s why I don’t use voice-to-text much; I type faster than it thinks.) :-)

    • #11
    • February 6, 2019 at 9:56 am
    • 4 likes
  12. Member

    Judge Mental:

    All I had to do was write a piece of software that could manipulate and aggregate symbols in the same way as a human brain. If I built that then not only would it be able to read these forms and ‘understand’ the information within, but would also be useful in an endless variety of other tasks.

    I didn’t really consider it Artificial Intelligence, because I tend to use the science fiction standard on that, and I didn’t expect this to ever wake up and care about its own existence.

    Interesting.

    The common assumption is that, were mankind to actually create something that’d function as AI, it’d rapidly become smarter and smarter as processor speeds and all increase by dint of Moore’s law.

    I’m not sure that this way of making AI is subject to the same sort of singularity. Which leaves room for interesting science fiction, in a realm, as Mr. McVey has noted, entirely unexplored.

    • #12
    • February 6, 2019 at 10:00 am
    • 3 likes
  13. Member

    “…I’d while away the hours, conversing with the flowers…”

    • #13
    • February 6, 2019 at 10:11 am
    • 5 likes
  14. Member
    Judge Mental Post author

    Nanda "Chaps" Panjan… (View Comment):

    Whoa, @judgemental! I love it when you talk bits-and-bytes and platforms…More, please?

    Serious question: How come voice-input software is so non-intuitive that one has to verbalize punctuation? Can’t grammar rules be plugged in like spelling? (That’s why I don’t use voice-to-text much; I type faster than it thinks.) :-)

    The serious answer is that I don’t know, because I don’t know how they’ve built it. Grammar rules can certainly be encoded and processed; Microsoft Word does that when it checks for things like passive voice. If I had to guess, I would say the difference between the two is that Word has a complete sentence to work with, whereas the voice-to-text software is dealing with one word of input at a time. So, does that slight pause in your speech indicate a comma, or a period, or that you were momentarily distracted by something outside the window? I would suspect that they have tried to do it, but have gotten results that are haphazard enough that they decided it wasn’t worth doing. You don’t want your user to have to go back and edit everything they type. Kind of defeats the purpose.

    • #14
    • February 6, 2019 at 10:21 am
    • 5 likes
  15. Member
    Judge Mental Post author

    Hank Rhody, Meddling Cowpoke (View Comment):
    I’m not sure that this way of making AI is subject to the same sort of singularity.

    I don’t think it would be. This is, in effect, a giant comparison engine, which means something has to turn the crank for it to operate at all. Unless you go out of your way to create something that will cause it to turn the crank itself, it should be completely lazy and uninterested in doing any extra work.

    • #15
    • February 6, 2019 at 10:33 am
    • 3 likes
  16. Member

    Judge Mental (View Comment):
    it should [be] completely lazy and uninterested in doing any extra work.

    Now there’s a motivation I can respect.

    • #16
    • February 6, 2019 at 10:36 am
    • 6 likes
  17. Member
    Judge Mental Post author

    Hank Rhody, Meddling Cowpoke (View Comment):

    “…I’d while away the hours, conversing with the flowers…”

    Now I’m going to have that stuck in my head.

    • #17
    • February 6, 2019 at 10:36 am
    • 4 likes
  18. Member

    Hank Rhody, Meddling Cowpoke (View Comment):

    “…I’d while away the hours, conversing with the flowers…”

    Why do people keep saying that?

    • #18
    • February 6, 2019 at 11:10 am
    • 3 likes
  19. Member

    Matt Balzer, Imperialist Claw (View Comment):

    Hank Rhody, Meddling Cowpoke (View Comment):

    “…I’d while away the hours, conversing with the flowers…”

    Why do people keep saying that?

    • #19
    • February 6, 2019 at 11:25 am
    • 4 likes
  20. Member

    Well, I’m intrigued to see how this series plays out, but I have serious reservations about the chances for success.

    Especially in English.

    Especially with written text.

    There’s a lot of words in English. A lot. And we keep adding and inventing and borrowing them all the time. Keeping up with the symbology behind each of them is going to be daunting, even with vaunted machine learning. Then there’s the fact that English sentence construction is ridiculously complicated, especially in technical documents, with the grammar rules coming into play all the time in different ways–especially from people, both writers and readers, who don’t understand sentence structure and grammar very well. Then you have homonyms and homographs, for which you have to discern the symbology from context.

    People misread written text, like email, all the time. Even with the best of intentions. Because meaning is lost when things are written down because human languages are not nearly as precise as programmers would like to believe.

    I’ll put it this way:

    If you actually learned the lexicon of Google’s Search screen decently-well in, say, 2004, you could very precisely tune the results to get the results you wanted pretty darn well, using the + and – and ” and () modifiers. Sure, it was similar to teaching a monkey about Regular Expressions for most casual users. But if you actually learned it, it was handy and incredibly powerful.

    Since about 2015 or so, Google has abandoned that entire lexicon in favor of true Natural Language Processing, which purports to know what you’re searching for (mostly based on comparing your search terms with similar searches and weighting the results based on what those users actually clicked on next). (Oh, and by tweaking it based on what they know about your personal interests, so they can show you ads for the Toyota you bought last week, since that’s obviously an impulse buy for you.)

    As a programmer, do you think the Google search experience is objectively better in 2019 than it was in 2004? I don’t.

    Anyway, I look forward to the next part. :)

    • #20
    • February 6, 2019 at 12:20 pm
    • 5 likes
  21. Member
    Judge Mental Post author

    dnewlander (View Comment):

    Well, I’m intrigued to see how this series plays out, but I have serious reservations about the chances for success.

    Especially in English.

    Especially with written text.

    There’s a lot of words in English. A lot. And we keep adding and inventing and borrowing them all the time. Keeping up with the symbology behind each of them is going to be daunting, even with vaunted machine learning. Then there’s the fact that English sentence construction is ridiculously complicated, especially in technical documents, with the grammar rules coming into play all the time in different ways–especially from people, both writers and readers, who don’t understand sentence structure and grammar very well. Then you have homonyms and homographs, for which you have to discern the symbology from context.

    People misread written text, like email, all the time. Even with the best of intentions. Because meaning is lost when things are written down because human languages are not nearly as precise as programmers would like to believe.

     

    Since anyone who followed the link to our original comments has seen it, I’ll reinforce what I said before. The learning is the hard part. Compared to that, making comparisons to derive meaning is relatively easy. (Relatively easy in this case still meaning very, very difficult.)

    • #21
    • February 6, 2019 at 12:27 pm
    • 3 likes
  22. Member

    Judge Mental (View Comment):

    dnewlander (View Comment):

    Well, I’m intrigued to see how this series plays out, but I have serious reservations about the chances for success.

    Especially in English.

    Especially with written text.

    There’s a lot of words in English. A lot. And we keep adding and inventing and borrowing them all the time. Keeping up with the symbology behind each of them is going to be daunting, even with vaunted machine learning. Then there’s the fact that English sentence construction is ridiculously complicated, especially in technical documents, with the grammar rules coming into play all the time in different ways–especially from people, both writers and readers, who don’t understand sentence structure and grammar very well. Then you have homonyms and homographs, for which you have to discern the symbology from context.

    People misread written text, like email, all the time. Even with the best of intentions. Because meaning is lost when things are written down because human languages are not nearly as precise as programmers would like to believe.

     

    Since anyone who followed the link to our original comments has seen it, I’ll reinforce what I said before. The learning is the hard part. Compared to that, making comparisons to derive meaning is relatively easy. (Relatively easy in this case still meaning very, very difficult.)

    As I said, I’m interested in the further parts to come.

    I’m just reserving my right to be skeptical.

    • #22
    • February 6, 2019 at 12:30 pm
    • 1 like
  23. Member

    dnewlander (View Comment):
    As a programmer, do you think the Google search experience is objectively better in 2019 than it was in 2004? I don’t.

    I don’t want a system that anticipates what 95 percent of the people want, 95 percent of the time. I don’t even want a system that anticipates what I want 98.5 percent of the time. I want a system that understands what I want this time.

    • #23
    • February 6, 2019 at 5:49 pm
    • 4 likes
  24. Contributor

    Write software that can deal with symbols in the same manner as a human brain? That shouldn’t be too difficult.

    Famous first words . . . to be followed by “there is no such thing as a 5-minute task.”


    This conversation is part of our Group Writing Series under the February 2019 Theme Writing: How Do You Make That? There are plenty of dates still available. Tell us about anything from knitting a sweater to building a mega-structure. Share your proudest success or most memorable failure (how not to make that). Do you agree with Arahant’s General Theory of Creativity? “Mostly it was knowing a few techniques, having the right tools, and having a love for building and creating whatever it was.” Our schedule and sign-up sheet awaits.

    I will post March’s theme mid month.

    • #24
    • February 6, 2019 at 5:50 pm
    • 3 likes
  25. Member

    The Reticulator (View Comment):

    dnewlander (View Comment):
    As a programmer, do you think the Google search experience is objectively better in 2019 than it was in 2004? I don’t.

    I don’t want a system that anticipates what 95 percent of the people want, 95 percent of the time. I don’t even want a system that anticipates what I want 98.5 percent of the time. I want a system that understands what I want this time.

    Exactly.

    My company’s software works with a templating “language” called “Pebble”. Good luck finding that in the first 10,000 pages of Google search results these days. Even the “geeky” results are for the defunct smartwatch. Now that it’s on Github, you can find it with “Pebble template language”. But you couldn’t six months ago when the only reference to it was one guy’s website from Germany. That then went dark five months ago. And it still takes you forever to sort through the 10,000 other projects named “Pebble” on Github.

    • #25
    • February 6, 2019 at 5:53 pm
    • 3 likes
  26. Member

    The Reticulator (View Comment):

    dnewlander (View Comment):
    As a programmer, do you think the Google search experience is objectively better in 2019 than it was in 2004? I don’t.

    I don’t want a system that anticipates what 95 percent of the people want, 95 percent of the time. I don’t even want a system that anticipates what I want 98.5 percent of the time. I want a system that understands what I want this time.

    How is it going to know that when I don’t even know what I want?

    • #26
    • February 6, 2019 at 5:59 pm
    • 3 likes
  27. Member

    Matt Balzer, Imperialist Claw (View Comment):
    How is it going to know that when I don’t even know what I want?

    Or when I don’t use Google?

    • #27
    • February 6, 2019 at 6:07 pm
    • 2 likes
  28. Member

    Arahant (View Comment):

    Matt Balzer, Imperialist Claw (View Comment):
    How is it going to know that when I don’t even know what I want?

    Or when I don’t use Google?

    Well are we talking about Google’s system, or any system?

    • #28
    • February 6, 2019 at 6:08 pm
    • 1 like
  29. Member

    Matt Balzer, Imperialist Claw (View Comment):

    Arahant (View Comment):

    Matt Balzer, Imperialist Claw (View Comment):
    How is it going to know that when I don’t even know what I want?

    Or when I don’t use Google?

    Well are we talking about Google’s system, or any system?

    Well, it started on Giggle’s system. If we’ve migrated, I missed the notice.

    • #29
    • February 6, 2019 at 6:11 pm
    • 3 likes
  30. Member

    Arahant (View Comment):

    Matt Balzer, Imperialist Claw (View Comment):

    Arahant (View Comment):

    Matt Balzer, Imperialist Claw (View Comment):
    How is it going to know that when I don’t even know what I want?

    Or when I don’t use Google?

    Well are we talking about Google’s system, or any system?

    Well, it started on Giggle’s system. If we’ve migrated, I missed the notice.

    I use DDG, and it doesn’t suggest anything. I love it.

    • #30
    • February 6, 2019 at 6:33 pm
    • 4 likes