How do you build a brain? How should I know? I’ve never built a brain. But I did spend a whole lot of time once thinking about how to do it.
In the mid-nineties, I was working for a company in Dallas that made software for insurance administration. I was rolling off the second project I had done there, starting my new job as Research Manager. This was technically a division-level job, but my division actually consisted of me and a part-time admin I shared with the core Development group. My mandate was to explore various new technologies, in the expectation that at least some of what I did would prove useful and could be integrated into a future product. The projects I had done are significant here, because they led me directly to the first request I got, and thus into my quest for a brain.
The first of them was an imaging project, designed to produce a paperless office. The idea was that when mail came into the mailroom, it would be scanned to an image, the image would be indexed, and then the paper would be shredded. Then, based on whether it was an application, a claim form, or other correspondence, the image would be routed into the appropriate process. My project integrated this capability into all of the company’s existing desktop software systems.
One of the cool features was that when doing data entry into one of those systems, in addition to simply displaying the ‘paper’ form, it would synchronize to the image viewer, and magnify the portion of the form that contained the pertinent information. So, if the data entry screen was sitting on ‘Last Name’, then the portion of the form that contained the last name would be magnified. When the user tabbed to ‘First Name’, the magnified image would change as well.
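The trick behind that synchronization is simple enough to sketch. Imagine a form template that maps each data-entry field to the bounding box where that field lives on the scanned image (all names and coordinates here are hypothetical, just to show the idea):

```python
# Hypothetical form template: field name -> (left, top, right, bottom)
# pixel coordinates of that field's location on the scanned image.
FORM_TEMPLATE = {
    "last_name":  (40, 120, 300, 150),
    "first_name": (320, 120, 560, 150),
}

def region_for_field(field_name):
    """Return the image region the viewer should magnify for a field."""
    return FORM_TEMPLATE[field_name]

# When the user tabs to 'First Name', the viewer zooms to its region.
print(region_for_field("first_name"))
```

The data entry screen just announces which field has focus, and the image viewer looks up the matching region and magnifies it.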
The second project involved creating a programming language to serve as a bridge between the components of a massive revamp of the company’s entire product line. I spent about two years total building out a pile of specialized functionality, but the first version, with handling similar to the BASIC language, was done within 90 days. That included: a Compiler that turned source code into platform-independent byte-code; a platform-dependent Virtual Machine Runtime module; and an Integrated Development Environment with all the bells and whistles, including an integrated debugger. (It was shockingly similar to the then-current version of Visual C++. Because I did an exact clone of that.)
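For readers who haven’t seen the compiler/VM split before, here is a toy version of the idea (not the language I actually built): the compiler emits platform-independent instructions, and a separate virtual machine, the only platform-dependent piece, executes them.

```python
def compile_expr(tokens):
    """Compile postfix tokens (e.g. '2 3 +') into bytecode tuples.
    This is the platform-independent half."""
    code = []
    for tok in tokens.split():
        if tok.isdigit():
            code.append(("PUSH", int(tok)))
        elif tok == "+":
            code.append(("ADD", None))
        elif tok == "*":
            code.append(("MUL", None))
    return code

def run(code):
    """A stack-machine VM: the platform-dependent half that
    actually executes the bytecode."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

print(run(compile_expr("2 3 + 4 *")))  # (2 + 3) * 4 -> 20
```

Ship the same bytecode everywhere; write one small VM per platform. That is the whole appeal.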
After all that, I felt like King Kong on steroids.
Which is good, because the first thing the boss man wanted me to do was to go back to the imaging system, and figure out a way to get the data off the paper form and into the data entry form, without someone having to sit there and type it.
No, not handwriting; since there were people in the business working on handwriting software, we immediately decided to move past that and assume there was machine-readable text. No, he wanted me to figure out how to read the text and fill out the forms properly, regardless of the verbiage the people used, taking us into the realm of software known as Natural Language Processing.
Put simply, the typical approach in NLP is to use a database of keywords and phrases that can be matched against a set of input, looking for matches to index ‘meaning’ from the text. But to cover the entire panoply of insurance (life, health, and property), the database of phrases would be virtually infinite. Have you seen the series of ads from Farmers Insurance showing all the weird claims they’ve paid off? Having a key phrase on file to match a description of the damage incurred during an incident where a moose became violently romantic with your Subaru is a fairly specific expectation.
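To make the problem concrete, here is the keyword approach boiled down to a few lines, with a hypothetical (and hopelessly incomplete) phrase database:

```python
# Hypothetical phrase database: key phrase -> indexed 'meaning'.
PHRASE_DB = {
    "rear-ended": "auto_collision",
    "water damage": "property_water",
    "hail": "property_hail",
    # ...and so on, toward a virtually infinite list of phrases.
}

def index_meaning(text):
    """Return the set of meanings whose key phrases appear in the text."""
    text = text.lower()
    return {meaning for phrase, meaning in PHRASE_DB.items()
            if phrase in text}

print(index_meaning("My car was rear-ended during a hail storm."))
# The amorous moose, of course, matches nothing at all.
```

Every claim the database author never imagined falls straight through, which is exactly the problem.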
So, we agreed that we wanted something better than that. What we really wanted was software that would allow a computer to ‘understand’ text as thoroughly as it ‘understands’ numbers. Well, dealing with numbers is built into the processor itself, whereas the only thing the processor does with text is shove it around and compare it in various ways. Clearly, we were looking for something new.
At around this same time, I happened to re-read a short story by Robert Heinlein called Blowups Happen. There is a character in the story named Lentz, a Symbologist, who discusses the nature of thought, maintaining that human beings think in symbols, and in fact, can’t think without symbols and that symbols for higher concepts are themselves made up of simpler, less complex symbols.
I’ll give you an example: do you remember in the movie Ghostbusters, when the demon dog thing shows up in Rick Moranis’ apartment? He comes tearing out of the front door of the building, screaming that there is a bear in his apartment. The bystanders laugh, until the thing follows him out, at which point one of them describes it as a cougar.
Neither of those witnesses could accurately describe what they had seen because much like Donnie, they had no frame of reference… for a demon dog thing. They saw something of a particular size, moving like an animal, possibly heard a growl, and their brains filled in the appropriate details to produce whatever animal with which their own unique set of symbols was most comfortable.
It was only when the beast stopped moving that Rick Moranis could assimilate this as something new, and he said, “nice doggie”. In other words, once he could see it clearly, his brain could form a new set of symbols that accurately matched the reality in front of him. He saw the size, and the shape, and the physicality; he saw the horns and the glowing red eyes. And his brain created new symbols that encompassed all those details, and the differences between this and other animals. You can assume that if he ever saw another demon dog thing throughout the rest of his life, he would immediately recognize it as such, because of the new symbols created here.
Putting the idea of human brains thinking in symbols together with the desired capabilities of this new tool, I realized what was truly needed. All I had to do was write a piece of software that could manipulate and aggregate symbols in the same way as a human brain. If I built that, then not only would it be able to read these forms and ‘understand’ the information within, but it would also be useful in an endless variety of other tasks.
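If you want a rough picture of what Lentz’s idea looks like as a data structure, here is one possible sketch: higher-level symbols built out of simpler ones, all the way down to primitives. The names are purely illustrative; this is the shape of the idea, not the software I set out to write.

```python
class Symbol:
    """A concept composed of simpler, less complex symbols."""
    def __init__(self, name, parts=()):
        self.name = name
        self.parts = list(parts)  # simpler symbols this one is built from

    def primitives(self):
        """Flatten a symbol down to its simplest components."""
        if not self.parts:
            return {self.name}
        found = set()
        for part in self.parts:
            found |= part.primitives()
        return found

horns = Symbol("horns")
red_eyes = Symbol("glowing red eyes")
animal = Symbol("animal", [Symbol("fur"), Symbol("four legs")])

# A new composite symbol, formed once the beast finally holds still.
demon_dog = Symbol("demon dog thing", [animal, horns, red_eyes])
print(sorted(demon_dog.primitives()))
```

Seen this way, the bystanders’ mistake is a lookup failure: their brains matched the partial set of primitives (size, movement, growl) to the nearest existing composite, bear or cougar, because no “demon dog thing” symbol existed yet to match.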
I didn’t really consider it Artificial Intelligence, because I tend to use the science fiction standard on that, and I didn’t expect this to ever wake up and care about its own existence. But it would be able to do many things that currently require a human, including, by the way, reading handwriting.
Write software that can deal with symbols in the same manner as a human brain? That shouldn’t be too difficult.
(King Kong, remember?)
This is the end of part one out of some number larger than one. In the next part, I’ll discuss just what it is that a human brain can actually do, and how it gets there. As I complete additional parts, I’ll grab additional Group Writing days, planning to finish before the month is up.
The inspiration for this post came during a recent conversation started by @hankrhody, entitled “How to Automate a Job Out of Existence”. In the comment section, @dnewlander remarked that you can’t make a machine understand English, and I differed. He asked how, and I threatened to write a post about it someday.
So Derwood, this one’s for you.