Byte, April '85

We continue with the project of reading Byte magazine every month… forty years late (you can find the previous entries under the blog's Byte tag, though hopefully I'll find the time to put together a proper index, one which, admittedly, probably only I would ever use).

Last month I said this issue was a packed one, and so it is:

Cover of the April 1985 issue of Byte. The cover story is artificial intelligence. The illustration shows a human hand drawing a robotic hand alongside a robotic hand drawing a human hand.

…but not packed with the things I usually highlight; rather, with a good number of articles on AI. I drop a few «plus ça change» remarks in these posts, but this time the entire central block of the magazine is one big «plus ça change». So much so that, from the rest of the issue, I'm only going to keep the April Fools' joke:

Photo of a Mac accessory that is a knife sharpener. The text accompanying the photo is as follows:

Knife the Mac

Ennui Associates has announced MacKnifer, a hardware attachment that mounts on the side of your Macintosh and sharpens knives, scissors, lawn-mower blades – anything that needs sharpening. With MacKnifer's patented double-action grinding wheel, you can easily sharpen any utensil in less time than it takes the Mac to open a file. According to the manufacturer, MacKnifer is so easy to use that you can operate it within 30 minutes of taking it out of the box. Turn your spare computing time into extra cash with a knife-sharpening business on the side... of your Macintosh.

For more information on MacKnifer, contact Ennui Associates, 52502 Marginal Avenue, Somnolencia, CA, 90541.

Here's the corresponding hoaxes.org entry, should anyone need it: https://hoaxes.org/af_database/permalink/the_macknifer.

(By the way, I've decided to switch magazine «suppliers» to this archive.org page, and as far as possible (read: whenever I remember) I'll try to link the articles and pieces I comment on.)

Getting down to business, things open with none other than Marvin Minsky:

COMMUNICATION WITH ALIEN INTELLIGENCE
by Marvin Minsky

It may not be as difficult as you think

WHEN FIRST WE MEET those aliens in outer space, will we and they be able to converse? I believe that, yes, we will— provided they are motivated to cooperate— because we'll both think in similar ways. I propose two kinds of arguments for why those aliens may think like us, in spite of having very different origins. These arguments are based on the idea that all intelligent problem solvers are subject to the same ultimate constraints – limitations on space, time, and materials. For animals to evolve powerful ways to deal with such constraints, they must have ways to represent the situations they face, and they must have processes for manipulating those representations. These two requirements are:

Economics: Every intelligence must develop symbol systems for representing things, causes, and goals, and for formulating and remembering the procedures it develops for achieving those goals.

Sparseness: Every evolving intelligence will eventually encounter certain very special ideas— e.g., about arithmetic, causal reasoning, and economics— because these particular ideas are very much simpler than other ideas with similar uses.

The economics argument is that the power of a mind depends on how it manages the resources it can use. The concept of thing is indispensable for managing the resources of space and the substances that fill it. The concept of goal is indispensable for managing how we use the time we have available—both for what we do and what we think about. Aliens will use these notions too, because they are both easy to evolve and because there appear to be no easily evolved alternatives for them.

The sparseness theory tries to make this more precise by showing that almost any evolutionary search will soon find certain schemes that have no easily accessible alternatives, that is, other different ideas that can serve the same purposes. These ideas or processes seem to be peculiarly isolated in the sense that the only things that resemble them are vastly more complicated. I will discuss only the specific example of arithmetic and conjecture that those other concepts of objects, causes, and goals have this same island-like character.

Critic: What if those aliens have evolved so far beyond us that their concerns are unintelligible to us and their technologies and conceptions have become entirely different from ours?

Then communication may be infeasible. My arguments apply only to those stages of mental evolution in...

Artificial-intelligence pioneer Marvin Minsky is Donner Professor of Science in the Department of Electrical Engineering and Computer Science at Massachusetts Institute of Technology (545 Technology Square, Cambridge, MA 02139). In the late 1950s, Minsky, together with John McCarthy (now at Stanford), created MIT's AI laboratory, of which Minsky was the director for several years. Minsky has long been interested in SETI (the Search for Extraterrestrial Intelligence) and participated in the important 1971 conference on communication with extraterrestrials, held in Soviet Armenia and organized by Carl Sagan.

Minsky co-founded MIT's AI lab, had received the Turing Award in 1969, invented the first head-mounted display, and co-designed the Logo turtle with Seymour Papert. For his doctoral thesis, in the early fifties, he built SNARC, one of the first attempts at a machine that imitated the behavior of the human brain: it was designed to simulate a neural network, specifically a set of interconnected artificial neurons, emulating rats running through mazes and gradually learning to find the right path based on rewards (what we now call reinforcement learning). A caveat: Minsky (who died in 2016) was associated with Jeffrey Epstein and visited his private island, although Minsky's wife, who was there with him, maintains that he never did anything morally questionable there.
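(A quick aside of mine, with nothing to do with SNARC's actual hardware of vacuum tubes and motors: the reward-driven maze-learning idea, in its modern tabular form, fits in a few lines of Python. Everything here, names and numbers included, is my own toy illustration.)

```python
import random

# Toy "rat in a corridor": states 0..4, food at state 4.
# Actions: 0 = step left, 1 = step right. Reward only on reaching the food.
N_STATES, GOAL = 5, 4
q = {(s, a): 0.0 for s in range(N_STATES) for a in (0, 1)}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration
for episode in range(200):
    s = 0
    while s != GOAL:
        # Mostly follow the best-known action, sometimes explore at random.
        a = random.choice((0, 1)) if random.random() < eps else max((0, 1), key=lambda a: q[(s, a)])
        s2, r = step(s, a)
        # Reinforce: nudge the estimate toward reward plus discounted future value.
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
        s = s2

# After training, the greedy choice at every state heads straight for the food:
print([max((0, 1), key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])  # typically [1, 1, 1, 1]
```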

Minsky, who was deeply interested in SETI, the project to search for extraterrestrial intelligence, lays out in the article his hypothesis that all intelligence, alien or not, must be similar, and that communication should therefore not be too difficult, unless the other intelligence has evolved past the stage of worrying about its survival, communication, and expanding its control over the physical world. To make his case he leans on a thought experiment exploring Turing machines, and on the universality of arithmetic, eventually arriving at the inevitability, in turn, of many aspects of language (the reasoning reminds me of Chomsky, for some reason). I wouldn't dare summarize or judge the article, but combining the AI of artificial intelligence with the AI of alien intelligence is a curious move, to say the least…

THE QUEST TO UNDERSTAND THINKING

Roger Schank and Larry Hunter

It begins not with complex issues but with the most trivial of processes

ARTIFICIAL INTELLIGENCE, or AI, takes as its subject matter some of the most daunting questions of our existence. What is the nature of mind? What are we doing when we are thinking, feeling, seeing, or understanding? Is it possible to comprehend how our minds really work? These questions have been asked for thousands of years, but we've made little tangible progress at answering them.

AI offers a new tool for those pursuing the quest: the computer. As anyone who has used one can attest, computers often create more problems than they solve. But for probing the issues of mind and thought, that is just what we need.

The fundamental use of computers in helping us understand cognition is to provide a testbed for our ideas about what the mind does. Theories of mind often take the form of process descriptions. For example, a theory of question answering might claim that people first translate a question into an internal representation, use that representation as an index into memory, translate the recalled memory into an appropriate form for an answer, and then generate the words to communicate it. (This example is offered not as a real theory of question answering but as an example of what a process theory of mind might look like.)

Process theories seem to be a good way of describing what might go on inside the brain. One problem with them, however, is that all too often what looks like a good description really isn't specific enough to make the theory clear. "Use the representation as an index into memory" isn't a good explanation of the processes behind remembering a fact. How are facts recalled? How is the memory organized? What happens when memory gets very large? What if a fact isn't directly encoded in memory but can be inferred from something that is? A researcher trying to write a program that embodies the above simplistic theory would run into all of these problems and more. That's why we need to write programs. Programming forces us to be explicit, and being explicit forces us to confront the problems with our theories.

Not long ago, AI researchers like ourselves focused on what they considered to be manifestations of highly intelligent behavior: playing chess, proving mathematical theorems, solving complex logical puzzles, and the like. Many AI researchers devoted a lot of energy to these projects and found powerful computational techniques for accomplishing such "intelligent" tasks. But we discovered that the techniques we developed are not the same ones that people actually use to perform these tasks, and we have instead begun to concentrate on tasks that almost any adult finds trivial: using language, showing common sense, learning from past experiences.

Language

We began studying these "trivial" tasks by trying to write programs that...

The next article also has «wikipediable» authors: Roger Schank earned a PhD in linguistics after graduating in mathematics, and was a professor of computer science and psychology at Yale, where in 1981 he founded the Yale Artificial Intelligence Project; in 1989 he would do the same with the Institute for the Learning Sciences at Northwestern. His research focused on natural language understanding and case-based reasoning. And, I'm afraid, not only did he also know Epstein (it helps that Epstein occasionally funded AI research), like Minsky, but he voiced his support for him when the scandal began to come to light :-S. Lawrence Hunter, for his part, works these days in computational biology, a field he came to through case-based reasoning for lung cancer diagnosis.

And the article itself? It touches, first, on a theme that strikes me as vital and, at the same time, utterly absent from today's debate: how artificial intelligence could be a very good tool for helping us understand what «natural» intelligence is and how it works. It then focuses on some of the problems of processing natural language, such as ambiguity, context, and memory (the remembering kind, not necessarily RAM).
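The toy «process theory» of question answering in the excerpt above (translate the question into an internal representation, use it as an index into memory, turn the recalled fact into an answer) is practically a program already. Here is a deliberately naive sketch of my own, just to make their point about explicitness: every question the prose glosses over (how is memory organized? what if a fact must be inferred?) becomes a design decision the code cannot dodge.

```python
# Memory as a flat dict keyed by (subject, relation): already a strong
# commitment about organization that the prose never had to make.
memory = {
    ("minsky", "works_at"): "MIT",
    ("schank", "works_at"): "Yale",
}

def parse(question: str) -> tuple[str, str]:
    # "Translate the question into an internal representation."
    # Crude: expect "SUBJECT RELATION?", e.g. "minsky works_at?".
    subject, relation = question.rstrip("?").lower().split()
    return (subject, relation)

def answer(question: str) -> str:
    key = parse(question)
    # "Use that representation as an index into memory."
    # What if the fact isn't directly encoded but could be inferred? Unhandled.
    fact = memory.get(key)
    # "Translate the recalled memory into an appropriate form for an answer."
    return fact if fact is not None else "I don't know."

print(answer("minsky works_at?"))  # MIT
print(answer("hinton works_at?"))  # I don't know.
```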

I'm running over my word count, so I'll only mention The LISP tutor and PROUST, An automatic debugger for Pascal programs, which, as the reader can imagine, focus on uses of AI (now seemingly within closer reach, though we'll see) for teaching us to program and helping us program.

And we close with…

LEARNING IN PARALLEL NETWORKS

Simulating learning in a probabilistic system

THE BRAIN is an incredibly powerful computer. The cortex alone contains over 10^10 neurons, each connected to thousands of others. All of your knowledge is probably stored in the strengths of these connections, which somehow give you the effortless ability to understand English, to make sensible plans, to recall relevant facts from fragmentary cues, and to interpret the patterns of light and dark on the back of your eyeballs as real three-dimensional scenes. By comparison, modern computers do these things very slowly, if at all. They appear very smart when multiplying long numbers or storing millions of arbitrary facts, but they are remarkably bad at doing what any five-year-old can.

One possible explanation is that we don't program computers suitably. We are just so ignorant about what it takes to understand English or interpret visual images that we don't know the appropriate data structures and procedures to put into the machine. This is what most people who study artificial intelligence (AI) believe, and over the last 20 years they have made a great deal of progress in reducing our ignorance in these areas.

Another possible explanation is that brains and computers work differently. Perhaps brains have evolved to be very good at a particular style of computation that is necessary in everyday life but hard to program on a conventional computer. Perhaps the fact that brains store knowledge as connection strengths makes them particularly adept at weighing many conflicting and cooperating considerations very rapidly to arrive at a common-sense judgment or interpretation. Of course, any style of computation whatsoever can be simulated by a digital computer, but when one kind of machine simulates a very different kind it can be very slow. To simulate all the neurons in a human brain in real time would take thousands of large computers. To simulate all the arithmetic operations occurring in a Cray would take billions of people.

It is easy to speculate that the brain uses quite different computational principles, but it is hard to discover what those principles are. Empirical studies of the behavior of single neurons and their patterns of connectivity have revealed many interesting facts, but the underlying computational principles are still unclear. We don't know, for example, how the brain represents complex ideas, how it searches for good matches between stored models of objects and the incoming sensory data, or how it learns. In this issue Jerome A. Feldman describes some current ideas about how parallel networks could recognize objects (see "Connections" on page 277). I will describe one old and one new theory of how learning could occur in these brain-like networks. Please remember that these theories are extreme idealizations; the real brain is much more complicated.

Associating Inputs with Outputs

Imagine a black box that has a set of input terminals and a set of output...

…none other than a physics Nobel laureate (and Turing Award winner, and Prince of Asturias laureate, and I've lost count of how many other prizes): Geoffrey Hinton. Giving a physics Nobel to a physics graduate whose doctorate and research were in AI is something I won't get into now, but you have to grant me that publishing him when he was a mere assistant professor at Carnegie Mellon, alongside figures of at least apparently far greater renown, is not bad at all. All the more so given that what he is explaining here, if I've understood it correctly, is the work on training neural networks that became one of the pillars of all the recognition and prizes he has since accumulated.
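(If you'd like the flavor in code: the «old theory» the article contrasts with its new one is, if I'm reading it right, the perceptron-style rule, where each connection strength gets nudged whenever the output disagrees with the target. A minimal sketch of that classic rule, my own illustration rather than anything taken from the article:)

```python
# Perceptron-style learning: all the "knowledge" lives in connection strengths
# (weights), nudged whenever the unit's output disagrees with the target.
def train(examples, n_inputs, epochs=20, lr=0.1):
    w, b = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical OR from its truth table (linearly separable, so this converges).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data, 2)
for x, t in data:
    print(x, t, 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0)
```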

I won't go on any longer, but the whole table of contents of the special section deserves at least a quick look…

And, in any case, forty years is nothing.

If you want to keep reading, here are my notes on the March issue. And next month, with a bit of luck, more.
