How to Build a Brain (Part 1) – The Challenge


How do you build a brain? How should I know? I’ve never built a brain. But I did spend a whole lot of time once thinking about how to do it.

In the mid-nineties, I was working for a company in Dallas that built software for insurance administration. I was rolling off the second project I had done there and starting my new job as Research Manager. This was technically a division-level job, but my division actually consisted of me and a part-time admin I shared with the core Development group. My mandate was to explore various new technologies, in the expectation that at least some of what I did would prove useful and could be integrated into a future product. Those two projects matter here, because they led me directly to the first request I got, and thus into my quest for a brain.

The first of them was an imaging project, designed to produce a paperless office. The idea was that when the mail came into the mailroom, it would be scanned to an image, the image would be indexed, and the paper would be shredded. Then, based on whether it was an application, a claim form, or other correspondence, the image would be routed into the appropriate process. My project integrated this capability into all of the company's existing desktop software systems.

One of the cool features was that when doing data entry into one of those systems, in addition to simply displaying the 'paper' form, the data entry screen would synchronize with the image viewer and magnify the portion of the form that contained the pertinent information. So, if the data entry screen was sitting on 'Last Name', then the portion of the form that contained the last name would be magnified. When the user tabbed to 'First Name', the magnified image would change as well.
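There was no magic to it. Here is a minimal sketch of the idea in Python (every name here is hypothetical; the real thing was wired into the company's desktop systems, not this API): a table maps each data-entry field to a bounding box on the scanned image, and a focus handler tells the viewer to zoom there.

    # Minimal sketch of field-to-image synchronization (all names hypothetical).
    # Each data-entry field maps to the region of the scanned form where its
    # value appears: (left, top, width, height) in pixels.
    FIELD_REGIONS = {
        "last_name":  (120,  80, 300, 40),
        "first_name": (440,  80, 300, 40),
        "policy_no":  (120, 140, 250, 40),
    }

    def on_field_focus(field_name, viewer):
        """When the user tabs into a field, magnify the matching region."""
        region = FIELD_REGIONS.get(field_name)
        if region:
            viewer.zoom_to(*region)  # hypothetical image-viewer call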

The second project involved creating a programming language to serve as a bridge between the components of a massive revamp of the company's entire product line. I spent about two years total building out a pile of specialized functionality, but the first version, which handled much like the BASIC language, was done within 90 days. That included: a compiler that turned source code into platform-independent byte-code; a platform-dependent virtual machine runtime module; and an integrated development environment with all the bells and whistles, including an integrated debugger. (It was shockingly similar to the then-current version of Visual C++. Because I did an exact clone of that.)
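If the compiler/VM split sounds exotic, it isn't. Here is a toy illustration in Python, nothing like the real product, that "compiles" a single expression into byte-code and runs it on a tiny stack machine:

    # Toy sketch of the compile-to-byte-code / run-on-a-VM split.
    # (An illustration only; the real thing compiled a BASIC-like language.)
    PUSH, ADD, MUL, PRINT = range(4)

    def compile_expr():
        # The "compiler": (2 + 3) * 4, hand-translated to
        # platform-independent byte-code.
        return [(PUSH, 2), (PUSH, 3), (ADD, None),
                (PUSH, 4), (MUL, None), (PRINT, None)]

    def run(bytecode):
        # The "virtual machine": the only piece that is platform-dependent.
        stack = []
        for op, arg in bytecode:
            if op == PUSH:
                stack.append(arg)
            elif op == ADD:
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif op == MUL:
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif op == PRINT:
                print(stack.pop())  # prints 20

    run(compile_expr())

The same byte-code runs anywhere you can port that little run() loop, which is the whole point of the split.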

After all that, I felt like King Kong on steroids.

Which is good, because the first thing the boss man wanted me to do was to go back to the imaging system, and figure out a way to get the data off the paper form and into the data entry form, without someone having to sit there and type it.

What… handwriting?

No, not handwriting. In fact, since there were people in the business already working on handwriting software, we immediately decided to move past that and assume there was machine-readable text. No, he wanted me to figure out how to read the text and fill out the forms properly, regardless of the verbiage the people used, taking us into the realm of software known as Natural Language Processing (NLP).

Put simply, the typical approach in NLP is to use a database of keywords and phrases that can be matched against a set of input, looking for matches to index 'meaning' from the text. But to cover the entire panoply of insurance (life, health, and property), the database of phrases would have to be virtually infinite. Have you seen the series of ads from Farmers Insurance showing all the weird claims they've paid? Having a key phrase on file that matches a description of the damage incurred when a moose became violently romantic with your Subaru is a fairly specific expectation.
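To make that concrete, here is a minimal sketch of the keyword approach in Python (the phrase table and category names are invented for illustration). Notice what happens to the moose claim:

    # Minimal sketch of the keyword/phrase approach (illustrative data only).
    PHRASE_INDEX = {
        "rear-ended":        "auto_collision",
        "hail damage":       "property_damage",
        "water in basement": "property_damage",
        "hit a deer":        "auto_animal_strike",
    }

    def classify(text):
        """Return the first category whose key phrase appears in the text."""
        lowered = text.lower()
        for phrase, category in PHRASE_INDEX.items():
            if phrase in lowered:
                return category
        return "unknown"

    print(classify("I was rear-ended at a stoplight."))         # auto_collision
    print(classify("A moose became romantic with my Subaru."))  # unknown

The system is only as good as its phrase table, and the world keeps generating claims nobody thought to put in the table.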

So, we agreed that we wanted something better than that. What we really wanted was software that would allow a computer to ‘understand’ text as thoroughly as it ‘understands’ numbers. Well, dealing with numbers is built into the processor itself, whereas the only thing the processor does with text is shove it around and compare it in various ways. Clearly, we were looking for something new.

At around this same time, I happened to re-read a short story by Robert Heinlein called "Blowups Happen". There is a character in the story named Lentz, a symbologist, who discusses the nature of thought. He maintains that human beings think in symbols, and in fact can't think without symbols, and that symbols for higher concepts are themselves made up of simpler, less complex symbols.

I’ll give you an example: do you remember in the movie Ghostbusters, when the demon dog thing shows up in Rick Moranis’ apartment? He comes tearing out of the front door of the building, screaming that there is a bear in his apartment. The bystanders laugh, until the thing follows him out, at which point one of them describes it as a cougar.

Neither of those witnesses could accurately describe what they had seen because, much like Donny, they had no frame of reference… for a demon dog thing. They saw something of a particular size, moving like an animal, possibly heard a growl, and their brains filled in the appropriate details to produce whichever animal their own unique set of symbols was most comfortable with.

It was only when the beast stopped moving that Rick Moranis could assimilate it as something new, and he said, "Nice doggie." In other words, once he could see it clearly, his brain could form a new set of symbols that accurately matched the reality in front of him. He saw the size, and the shape, and the physicality; he saw the horns and the glowing red eyes. And his brain created new symbols that encompassed all those details, and the differences between this and other animals. You can assume that if he ever saw another demon dog thing throughout the rest of his life, he would immediately recognize it as such, because of the new symbols created here.

Putting the idea of human brains thinking in symbols together with the desired capabilities of this new tool, I realized what was truly needed. All I had to do was write a piece of software that could manipulate and aggregate symbols in the same way as a human brain. If I built that, then not only would it be able to read these forms and 'understand' the information within, but it would also be useful in an endless variety of other tasks.
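A first rough cut at that idea as a data structure, purely my own illustration here and not anything I actually built back then, might look like this: a symbol is just a name plus the simpler symbols it is built from, and recognizing something means finding the known symbol whose parts best overlap what you observed.

    # Rough sketch: symbols built from simpler symbols (illustrative only).
    class Symbol:
        def __init__(self, name, parts=()):
            self.name = name
            self.parts = set(parts)  # the simpler symbols this one is made of

    def best_match(observed, known):
        """Pick the known symbol sharing the most parts with what was seen."""
        return max(known, key=lambda s: len(s.parts & observed))

    known = [
        Symbol("bear",   {"big", "furry", "growls", "moves like an animal"}),
        Symbol("cougar", {"big", "fast", "growls", "moves like an animal"}),
    ]

    # A half-second glimpse: the brain fills in the nearest familiar symbol.
    glimpse = {"big", "growls", "moves like an animal"}
    print(best_match(glimpse, known).name)  # "bear" (ties go to the first listed)

    # A clear look builds a new, more specific symbol out of its parts...
    full_view = glimpse | {"horns", "glowing red eyes"}
    known.append(Symbol("demon dog", full_view))
    # ...and from then on, the demon dog is recognized as itself.
    print(best_match(full_view, known).name)  # "demon dog"

That is the Ghostbusters scene in fifteen lines: a glimpse matches "bear", a good look mints a new symbol, and the new symbol wins ever after.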

I didn’t really consider it Artificial Intelligence, because I tend to use the science fiction standard on that, and I didn’t expect this to ever wake up and care about its own existence. But it would be able to do many things that currently require a human, including, by the way, reading handwriting.

Write software that can deal with symbols in the same manner as a human brain? That shouldn’t be too difficult.

(King Kong, remember?)

———————————————————————————————————————————————————

This is the end of part one out of some number larger than one. In the next part, I’ll discuss just what it is that a human brain can actually do, and how it gets there. As I complete additional parts, I’ll grab additional Group Writing days, planning to finish before the month is up.

The inspiration for this post came during a recent conversation started by @hankrhody, entitled “How to Automate a Job Out of Existence”. In the comment section, @dnewlander remarked that you can’t make a machine understand English, and I differed. He asked how, and I threatened to write a post about it someday.

So Derwood, this one’s for you.

Published in Group Writing

There are 37 comments.

  Arahant (#31):

    dnewlander: "I use DDG, and it doesn't suggest anything. I love it."

    Same here, and yet it finds what I ask for.
  dnewlander (#32):

    Arahant: "Same here, and yet it finds what I ask for."

    I know. It's like it's magic!
  The Reticulator (#33):

    Matt Balzer, Imperialist Claw: "How is it going to know that when I don't even know what I want?"

    Arahant: "Or when I don't use Google?"

    Matt Balzer, Imperialist Claw: "Well, are we talking about Google's system, or any system?"

    I'm also talking about the idea we sometimes hear from companies like Microsoft about how they want the user interface to anticipate what menus and menu items you need and push those out toward your eyeballs. I don't like that. I'd rather their user interface had things in predictable places. Letting me do handy, short-term customizations would be OK, though. (I have fond recollections of the days when computers were programmable.)
  dnewlander (#34):

    The Reticulator: "I'd rather their user interface had things in predictable places. Letting me do handy, short-term customizations would be OK, though."

    What, you mean like Office used to be… two versions before the dreaded Ribbon?
  Hank Rhody, Meddling Cowpoke (#35):

    Yeah, the local McDonald's these days has a flatscreen menu, with shifting prices to indicate small, medium, and large. Looks gorgeous and is almost unusable.
  dnewlander (#36):

    Hank Rhody, Meddling Cowpoke: "Looks gorgeous and is almost unusable."

    Mostly because a 40″ touchscreen is a terrible size for a touchscreen and should never make you scroll.

    They'll fix that.
  Arahant (#37):

    dnewlander: "What, you mean like Office used to be… two versions before the dreaded Ribbon?"

    LibreOffice.