Byte, July '85

We continue our project of leafing through Byte magazine with a forty-year delay. This month the cover story is space and computing, and we won't find any especially interesting gems… (The August issue will be much more interesting, I can promise that.)

The cover of the July 1985 issue of Byte. The cover image is a huge five-and-a-quarter-inch floppy disk. The issue's theme is Computers and Space.

Among the news items, systems with 286 processors are starting to arrive (the May issue had covered the launch of the IBM PC AT):

New 80286 Systems Flood COMDEX

Late spring saw the introduction of many new IBM PC AT-compatible computers. By mid-May, new 80286-based systems had been announced by Kaypro, ITT, Compaq, TeleVideo, Corona, Texas Instruments, Zenith, NCR, Tomcat, and Basic Time. Another multiuser AT compatible computer, available from MAD Computer in both floor and desktop models, will be sold only to other manufacturers. Wang also disclosed that it is developing an AT-compatible system.

Intertec, West Columbia, SC, has redesigned its HeadStart computer, replacing its 8086 processor with an 80286 and eliminating its 3½-inch disk drive. The HeadStart ATS's standard 256K bytes of RAM can be expanded to 3 megabytes; the computer also includes serial, parallel, and network interfaces. The basic HeadStart ATS is priced at $1895 without disk drives. A dual 5¼-inch disk-drive add-on unit is $495 extra. Intertec also announced several 80186-based file servers for its MultiLAN proprietary polling network; a $695 interface card also allows IBM PCs to be attached to the network.

As long as there is "Commodore content" left, we will keep picking it up here, that's a fact:

Two-page Commodore 128 ad explaining everything you would have to add to an Apple IIc to match the power of a Commodore 128, such as its sound capabilities.

Doesn't it remind you a bit of that wretched Apple ad from a few months back, the one where they crushed all kinds of creative tools? Maybe Apple was working through the trauma of this anti-Apple ad from barely four decades ago…

In the books section we find another recurring topic: accessibility.

PERSONAL COMPUTERS AND SPECIAL NEEDS 
Reviewed by John Wilke

In 1977, a group of activists with a variety of disabilities staged a symbolic sit-in at the Department of Health, Education, and Welfare to demonstrate support for a bill frequently called "the civil rights act for the disabled."

Since that legislation became law, engineers and city planners must design public buildings that are accessible to all people. The young man who led the HEW demonstration and lobbied successfully for the new law has turned his attention to overcoming another set of barriers: software, computers, and communications equipment that, by design, shut out the disabled.

Frank G. Bowe is quick to point out in Personal Computers and Special Needs that just as new technology is beginning to make it possible for disabled individuals to not only communicate more effectively but also pursue meaningful employment in the information industry, there is a lack of physically compatible and affordable computer interfaces. This paradox is an underlying theme in Bowe's book, a survey of personal computer peripherals and communications prostheses available to people whose hearing or vision is impaired or who are unable to manage normal movement.

Bowe takes what might have been little more than a listing of the latest in speech synthesizers and keyboard emulators and peoples it with firsthand accounts of how the devices are making life more productive for disabled people. Unifying this effort is his concern that with the transition to an increasingly information-based economy— with its obvious promise of fuller participation for the disabled—the danger remains that a new set of barriers will prevent them from participating.

The book, then, addresses both how-to and why. It was written first for the nearly 30 million Americans who might...

The usual comment: for more than forty years now a few people have cared about the potential of digital technologies to put everything within reach of people with disabilities… while everyone else pays the subject not the slightest attention. The book (Personal Computers and Special Needs), by the way, is available on archive.org.

In the March issue we had looked at flat screens using plasma panels and electroluminescent displays, the technologies we now know weren't going to win out. Now we get to LCDs:

Liquid-Crystal Displays for Portables

Inside the display technology that has made portable computers portable

Several months ago I got into a discussion with a computer enthusiast about which portable computer to buy. I quickly whipped out my portable and began preaching its merits and demonstrating how powerful it is. I could see the display perfectly, but the fellow standing next to me was having difficulty reading what I had typed. Poor display quality is a common limitation in portable computers. Most portables (not to be confused with transportables) have twisted-nematic liquid-crystal displays (TN-LCDs), with restricted viewing angles and limited contrast. They must be operated under proper ambient lighting conditions.

In mid-1982, there were only a few low-profile displays on the market. Of the available technologies, TN-LCD was the only one that had acceptable power requirements for battery operation. A typical 16-line LCD module dissipates approximately % watt (W). Other available flat-panel technologies...

(By the way, if you dare follow the link (each scan is linked to the corresponding page of the magazine on Archive), be warned that you're in for a fairly technical discussion of LCD technology.)

And so we reach the cover story:

COMET LINES IN FORTRAN

by David S. Dixon

The program described calculates the positions of asteroids and comets

THE PROGRAM DISCUSSED in this article is intended to allow amateur astronomers to calculate the positions of asteroids or comets with greater accuracy than the programs previously published in general literature. Written in FORTRAN IV, the program should be translatable to any BASIC that supports double-precision calculation. But be advised that this is a number-crunching program: it may run for hours if rewritten in interpreted BASIC.

Asteroids are a very challenging target for the observer: they appear as points of light just like the stars. Depending on the asteroid's position relative to earth, it may or may not demonstrate detectable motion against the background stars. Frequently, several nights of observation are required to see displacement and identify the asteroid. Successfully hunting a particular asteroid usually means having a good idea of the asteroid's position at the intended time of observation and having a good set of star charts.

The problem is that accurate tables of locations for asteroids, known as ephemerides, are not easy to come by. The United States Naval Observatory publishes ephemerides for the four major asteroids in The Astronomical Almanac each year, but there are thousands of named asteroids. (For a list of books and periodicals mentioned in this and other articles, see the "Astronomy Sources" text box on page 244.) The Soviet Union's Institute of Theoretical Astronomy publishes the Ephemerides of Minor Planets, which gives ephemerides for thousands of asteroids, but only for a few weeks at opposition, and it is a difficult publication to obtain. Both the Russian and the Naval Observatory publications, however, also give the orbital elements for a large number of asteroids, and with the elements it is possible to calculate the ephemerides of an asteroid yourself.

Many of the books and magazine articles that address calculating the position of a planet solve the problem by the model devised by Johannes Kepler in 1609. The method models the motion of a body in the solar system as involving only the sun and the body in question. This means that to find the relative positions of Earth and Mars in a common coordinate system you solve the two-body sun-Mars problem, solve the two-body sun-Earth problem, and, using spherical trigonometry, combine the two results to solve the Earth-Mars problem. The method can produce results satisfactory for use in finding planets, but the accuracy for use on asteroids is frequently inadequate. Kepler's model is a remarkable achievement since he derived it by geometry as an empirical solution based on position measurements made by Tycho Brahe. Kepler's model is summarized in his first two laws:

First law: The orbit of each planet is an ellipse, with the sun at one of the two foci.

Second law: The line joining the planet to the sun sweeps over equal areas of the ellipse in equal intervals of time.

It was not until more than 50 years after Kepler's work was published that the work of Sir Isaac Newton explained the process that Kepler's model described and how the model was incomplete. Newton's law of gravi…
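(A quick aside: the computational heart of a program like the one described is precisely this two-body step, solving Kepler's equation M = E − e·sin E for the eccentric anomaly. A minimal sketch in Python rather than the article's FORTRAN IV; the function names and the Newton iteration are my own choices, not the article's:)

```python
import math

def eccentric_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton's method."""
    E = M if e < 0.8 else math.pi  # standard starting guess
    for _ in range(100):
        delta = (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            return E
    raise RuntimeError("Kepler iteration did not converge")

def position_in_orbit(E, e, a):
    """True anomaly and heliocentric distance from the eccentric anomaly."""
    nu = 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                        math.sqrt(1 - e) * math.cos(E / 2))
    r = a * (1 - e * math.cos(E))
    return nu, r
```

Do this for the asteroid and again for Earth, combine the two positions, and you have the geocentric direction the article talks about; the hours of interpreted BASIC come from repeating it for every ephemeris date.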

I imagine it won't surprise any regular obm reader to find a NASA engineer describing a FORTRAN IV program (I took my own first steps with FORTRAN 77 a few years later) for plotting the orbits of asteroids and comets. Just like today.

FORTRAN 77, by the way, was released in 1978; FORTRAN IV dates from 1961, and don't get me started on software evolution and the article's use of a language more than twenty years old. (Poking around Wikipedia, I see that Fortran (it lost the capital letters in version 90 which, as you can guess, came out in 1991) currently sits 12th in TIOBE, a ranking of programming-language popularity.)

A few pages further on you'll find a satellite-tracking program, followed by another article on telescope control. And on page 265, a review of an application for tracking Halley's Comet, which would pass through the solar system in 1986 (it is expected back in 2061).

And since this issue was a bit light on substance, I'll take the chance to linger on one of the most legendary columns in computer journalism: Jerry Pournelle's Computing at Chaos Manor.

Pournelle, who died in 2017, worked in operations research (for military ends, it seems), wrote science fiction (including a couple of best sellers co-written with Larry Niven), and was the power user of power users in his column, which ran in the magazine from 1980 to 2006. In it he recounted his adventures and misadventures with his extensive home collection of computers, and I used to read it (understanding maybe a quarter of it, on a good day) with fascination.

And by "column" I mean "mini-magazine": this issue's installment started on page 309 and ran to page 338 (with plenty of advertising in between, sure, but still, quite a binge), followed by a page of readers' mail specific to the section.

The content? We start with a visit to a computer fair that included lunch with Niklaus Wirth (note the joke attributed to him about the pronunciation of his name, which the piece includes). We continue with a discussion of whether the future belonged to Intel and its 286 and kin, or to Motorola and its 680x0 family, which segues into a war story about Modula-2 compilers (a language created by… Niklaus Wirth). And as if Wirth weren't enough, there is then a Mac fair at which he had dinner with Frank Herbert. Yes. That Frank Herbert (who would die in 1986, by the way). Pournelle was going to buy a second Mac intending to expand it to a megabyte of RAM (half of it devoted to a RAM disk, since the Mac couldn't address more than 512 KB), for barely 1500 dollars. We don't rule out more "Chaos Manor" war stories sneaking in here in the future.

Anyway. We'll leave it here (as I said, July '85 wasn't an especially interesting issue) and we'll be back next month with a much more interesting one, at least for me.

If anyone wants to dig deeper, here is the full July issue, and here the magazine's complete archive on Archive.

Byte, June '85

So, we continue with our project of reading each month's Byte magazine… from forty years ago. Now it's the turn of the June 1985 issue. You'll find every issue in this archive.org collection, and the one we're reading today is devoted, specifically, to programming techniques.

Cover of the June 1985 issue of Byte. The cover image is a Rubik's cube in which each of the cube's small faces is a computing keyword. Among them: char, DUP, UNIT, CAT, REM, puts, NULL, GREP, COND, FOR, GOTO...

Not exactly the issue of my life either, but it has its moments. The first one I'll stop at has to do with accessibility:

Products Will Aid Visually Disabled Computer Users

Computer Aids, Fort Wayne, IN, introduced several microcomputer products for the disabled. One product, Small-Talk, uses a modified Epson HX-20 and a speech synthesizer to allow blind users to perform word-processing tasks. With a printer, microcassette tape drive, and special word-processing software, the computer will cost about $2000.

Yes, really: for decades now there have been people thinking about using computing to help people with disabilities (visual ones, in this case). A pity that so many others forget all about it.

Next stop, one of those computer ads that makes you long for eighties industrial design:

Two-page ad from the apricot brand. On the left page, the remains of an apricot someone has just eaten, with the text Past. On the right, a whole apricot, with the text Present and Future. In the lower left corner of the page, a thumbnail of an eighties-design computer. More details in the next image.

Are you going to tell me it isn't gorgeous? Fine. You're going to tell me you can't see it. Let's do an enhance it:

Zoom of the previous image. We have a portable in three pieces. The first holds the screen (green phosphor) and presumably the CPU. The keyboard is a separate piece, and there is a third piece with what appears to be a trackball. The image comes with the text The Apricot Portable. 512K RAM, 720K diskette. 80x25 line LCD. MS-DOS. $2495

Is the Apricot Portable gorgeous or what? It had gone on sale in October '84 and, let's admit it, it ran circles around the portables of its day (including my beloved SX-64). It weighed less than seven kilos. And the separate pieces communicated by infrared! The future! In 1984! It even had voice recognition (though "recognition" deserves tons of scare quotes: Wikipedia says it could take in 4096 words, 64 of them in memory at a time). And its MS-DOS broke past the famous 640 KB (to reach 768, let's not get overexcited). In any case, a beauty.

Moving on, we come across another ad:

Ad. We see a photo of an IBM PC screen with what looks like a graphical interface in the style of an old Windows, and a drawing program in which someone has created what looks like a post-it with the word hi handwritten on it.

What is this GEM thing? Here's another version of the ad:

Again, an IBM PC screen with a graphical windowing environment (a mouse is also visible on the page) and the box of a piece of software, GEM Desktop, with its price ($49.95). It explains that the software was, indeed, a graphical environment for using IBM PCs without having to type cryptic commands.

GEM was the graphical environment developed by Digital Research (the CP/M company, founded in 1974 and later acquired by Novell in 1991), mainly for the Atari ST but also for PCs running MS-DOS, among others. And the mere sight of a GEM screenshot brings a little tear to my eye. Sniff.

Returning to our classic "did you think this was new?", today's entry is…

Turning a common AI operation into silicon

Logic programming is a staple of artificial-intelligence (AI) software and is often dominated by the pattern-matching process of unification (see the "Resolution and Unification" text box on page 173). In fact, when logic-programming languages such as Prolog and LOGLISP are used, as much as 50 to 60 percent of a computer's processing time is spent on unification. When a single algorithm is used that frequently, it is natural to consider implementing it as custom hardware. When that same algorithm lends itself to parallelism and concurrency because of its recursive, tree-search characteristics, it practically begs for VLSI (very large scale integration) implementation.

SUM History

Professor John Oldfield and a team of researchers at Syracuse University are developing the SUM (Syracuse Unification Machine), a coprocessor for computers geared toward AI programming. The project combines the resources of the Syracuse CIS (Computer and Information Science) department, ECE (Electrical and Computer Engineering) department, and the CASE Center (Computer Applications and Software Engineering Center, set up by New York State). Key SUM individuals are Dr. Oldfield himself (who contributed CAD [computer-aided design] and VLSI expertise), Professor Alan Robinson (who is the head of the logic-programming efforts at Syracuse), and Kevin Greene (who made the initial designs of the SUM). Because of a famous 1965 paper, Dr. Robinson is often credited with inventing unification. He is more modest, pointing to the work of Herbrand in the 1930s and the studies of Prawitz and Kanger concerning unification. Dr. Robinson contends that he was just the first to formalize the unification process and apply it to resolution.

In 1981, the Syracuse CIS logic-programming group learned that Caltech (California Institute of Technology) student Sheue-Ling Lien had designed a chip that embodied Dr. Robinson's original unification algorithm (see the "Unification on a Chip" text box, page 174). Dr. Robinson and his colleagues were somewhat taken aback that someone else had taken this step. Lien's report was a major inspiration for the development of the SUM, even though the chip it described was never actually made. Because ECE had been developing custom VLSI chip-design capability and had a strong logic-programming group, combining the pursuits "seemed a natural thing" according to Dr. Oldfield.

Coprocessor Strategy

As Dr. Oldfield explains, "Although we started talking about a unification chip, following along the lines of the Caltech one, it soon became fairly clear that at present levels of integration that was fairly ridiculous. You could make a chip, but it would be limited to solving such small problems that it wouldn't be worthwhile." The SUM group wanted to design a full-blown, practical processor. Besides, Lien's chip used Dr. Robinson's original 1965 algorithm. Much more efficient algorithms have been developed since.

When they realized that a single chip wasn't realistic, the members of the group looked at the possibility of a coprocessor, initially for the LMI…

Yes, dears, you might think TPUs and NPUs and the rest were invented just yesterday, but every time AI comes into fashion, someone thinks of hardware to accelerate it…
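(And for the curious: unification itself fits in a handful of lines; the hard part, and the reason for the custom silicon, is doing it millions of times. A toy sketch in Python, with a term representation of my own invention that has nothing to do with the SUM's design:)

```python
def is_var(t):
    """Variables are strings starting with '?'; anything else is a constant or term."""
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Follow variable bindings until a non-variable or an unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution making a and b equal, or None (occurs check omitted)."""
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # clash: different functors or arities

print(unify(('f', '?x', 'c'), ('f', 'a', '?y')))  # {'?x': 'a', '?y': 'c'}
```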

Next thing that caught my interest: how do you choose a programming language?

CHOOSING A PROGRAMMING LANGUAGE

by Gary Elfring

It's a three-step process

IF YOU WERE a carpenter building a new house, the first thing you would do would be to collect your tools. The tools you'd select would vary depending on the type of job. The same thing should be true if you are a programmer. You have a wide range of tools available, and you just choose the right tools for the job. Your tools are the languages that you program in and the environments needed to support those languages.

How do you go about selecting the right tool for the job? There are more programming languages available for microprocessors than most people could learn in a lifetime. What you need is a methodology that can be used to select one language from all the rest for a given application.

This article presents a practical method for comparing programming languages. It has an inherent bias toward compiled high-level languages. Compiled languages are faster than interpreted ones, and most interpreted languages also offer a compiler version. Since program speed is often an issue, I chose compilers over interpreters.

The actual process of evaluating a group of programming languages can be broken down into three major steps. The first step is to characterize the application the language is being selected for. Then you must identify the features that a language should have in order to deal with the previously described application. Finally, you should take into account practical considerations to further narrow down the language selection.

The Application

You can't choose a tool unless you know what you intend to do with it. You have to describe your application. Once you have this information you can then proceed to determine whether or not the existing language choices are the right tools for the job. To describe an application, you must consider both the type and size of the application. These questions must be answered before you can proceed any further in the language evaluation:

What is the type or class of application? What level of language is needed?

There are a number of different classes of program applications. An application can belong to a single class or several. Identifying the class of your application is relatively simple and helps narrow the list of acceptable languages. Some of the more common classes include scientific, business, and system programming; text processing; expert systems; and real-time control.

Most programming languages are better suited to solving one particular class of problem than another. COBOL is one example. While it is easy to write maintainable business programs with COBOL, no one would expect to use this language to solve real-time control problems.

Another consideration is the level of programming that the application will require. If you need low-level control of various machine-dependent features, then a very high level language...

The piece starts by giving preference to compiled over interpreted languages for speed reasons (more important forty years ago than now; just ask JavaScript and Python). It goes on to propose that the type of program matters a great deal (giving COBOL as an example of a language for business applications), and then asks whether you need a high- or low-level language… I started reading the article expecting that what it said would still be fairly current. Funnily enough, where one expected the most stability… apparently not. But then along comes this article on reusable components:

SOFTWARE-ICs

by Lamar Ledbetter and Brad Cox

A plan for building reusable software components

THE SOFTWARE WORLD has run headlong into the Software Crisis—ambitious software projects are hard to manage, too expensive, of mediocre quality, and hard to schedule reliably. Moreover, all too often, software delivers a solution that doesn't meet the customers' needs. After delivery, if not before, changing requirements mean that systems must be modified.

We must build systems in a radically different way if we are going to satisfy tomorrow's quantity and quality demands. We must learn to build systems that can withstand change.

Some system developers are already building software much faster and of better quality than last year. Not only that, the systems are much more tolerant of change than ever before, as a result of an old technology called message/object programming. This technology, made commercially viable because of the cost/performance trends in hardware, holds the key to a long-awaited dream— software reusability. A new industry is developing to support the design, development, distribution, and support of reusable Software-ICs (integrated circuits). A forthcoming series in UNIX/World will address message/object programming.

Message/Object Programming and Software-ICs

In this article we'll look at the concepts of message/object programming and how they support the building of "Software-ICs," as we call them, by satisfying the requirements for reusability.

A Software-IC is a reusable software component. It is a software packaging concept that combines aspects of subroutine libraries and UNIX filter programs. A Software-IC is a standard binary file produced by compiling a C program generated by Objective-C.

The notion of objects that communicate by messages is the foundation of message/object programming and fundamental to Software-ICs. An object includes data, a collection of procedures (methods) that can access that data directly, and a selection mechanism whereby a message is translated into a call to one of these procedures. You can request objects to do things by sending them a message.

Sending a message to an object is exactly like calling a function to operate on a data structure, with one crucial difference: Function calls specify not what should be accomplished but how. The function name identifies specific code to be executed. Messages, by contrast, specify what you want an object to do and leave it up to the object to decide how.

Requirements for Reusability

Only a few years ago, hardware designers built hardware much as we build software today. They assembled custom circuits from individual electrical components (transistors, resistors, capacitors, and so on), just as we build functions out of low-level components of programming languages (assignment statements, conditional statements, function calls, and so on). Massive reusability of hardware designs wasn't possible until a packaging technology evolved that could make the hardware environment of a chip (the circuit board and adjoining electrical components)...
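(The message-versus-function-call distinction in that excerpt is what we would now call dynamic dispatch, and any object-oriented language exhibits it. A trivial illustration in Python, my example rather than the article's Objective-C:)

```python
class Circle:
    def draw(self):
        print("drawing a circle")

class Square:
    def draw(self):
        print("drawing a square")

# A function call names specific code; a message names an intent
# and lets the receiving object decide how to carry it out:
for shape in (Circle(), Square()):
    shape.draw()  # the same "message", different code per object
```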

Read those first two paragraphs again.

Written in 1985? 1995? 2025? Want to bet we'll be able to reuse them without touching a comma in 2065?

Anyway… If you want to jump from this June '85 issue to our re-read of the May issue, here it is. And next month, more (I hope).

Byte, May '85

We keep reading Byte magazine, just forty years late (all the entries are available under the Byte tag). If you'd rather skip my excerpts and go to the source, the whole archive is on archive.org, and this month's issue is here.

This month things will be (relatively) brief. The cover, nothing special:

Cover of the May 1985 issue of Byte, devoted to AT&T's Unix PC

From the news, I'll single out one brief item. Did you know that this month marks forty years since Excel was announced? Did you know that Microsoft launched it first for the Mac? Well, yes…

Integrated Software for Macintosh

Microsoft's first integrated package, Excel for the Macintosh, has spreadsheet and graphics capabilities, a spreadsheet-oriented database, and a macro facility for storing and recalling commonly used keystrokes. It supports the AppleTalk network and provides two-way file compatibility with Multiplan and Chart for the Macintosh, Lotus 1-2-3 for the IBM PC, and applications that support Microsoft's SYLK format.

The Excel spreadsheet provides you with a 256-column by 16,384-row work area. You can view and reference multiple spreadsheets, consolidate worksheets, enter multiple variable problems or situations, and vary the borders, number formats, and font styles and size. You can assign names to cell references, numbers, and mathematical expressions and call four windows into a worksheet. You can produce instant "what if" graphics with Excel's charting abilities, which are functionally identical to Microsoft's Chart for the Mac. Excel files can be read directly into Chart, and Excel can read Chart files. When you alter numbers in a spreadsheet window, charts in separate windows are instantly updated. For data comparisons, you can open more than one chart window for the same or different data. The charting facility also has 42 predesigned charts, the ability to relocate objects on screen, and your choice of font, range, scale, and patterns.

The database is an ancillary function of Excel's spreadsheet. With it, you can sort, extract, and display information in a variety of ways. The database lacks form- and report-design capabilities; however, Excel's formatting capabilities let you create reports. It does let you remove data for analysis in a different section of your work area.

Excel's suggested retail price is $395. It requires 512K bytes of memory and will work with the Macintosh XL.

Did Microsoft fully trust Excel's success? I don't know, but a few pages further on the company was advertising its existing spreadsheet solution for the Mac:

Microsoft ad for its spreadsheet for the Mac, Multiplan, plus Microsoft Chart for making graphs. It plays up that Multiplan uses all of the Mac's friendly features. They also announce that Word, File, and BASIC are available for the Mac.

Did you think the product chaos of the big tech companies was a recent thing? Forty years ago Microsoft was advertising its Multiplan spreadsheet for the Mac while announcing the launch of Excel for the same platform…

And in the product list we have none other than IBM's PC AT, with its 80286 and its 256 KB of RAM in the base model (a mere four thousand dollars of the day):

IBM PC AT

The PC gets down to business

The IBM PC AT comes in two basic configurations. The basic model ($3995) comes with 256K bytes of RAM (random-access read/write memory), one of IBM's new high-capacity 1.2-megabyte disk drives, and a combination floppy disk/hard-disk controller card. Available for an additional $1800, the enhanced model adds 256K bytes of memory, a 20-megabyte hard-disk drive, and a serial/parallel interface adapter (see photo 1). Both systems are based on Intel's 80286 processor and have eight I/O (input/output) expansion slots and a battery-backed clock/calendar.

The AT comes with IBM's usual voluminous documentation. It includes a setup guide, an operations guide, and a BASIC manual, all in IBM's standard boxed looseleaf format. An unwelcome addition is a variety of small pamphlets packed in each box. While these are intended to be helpful quick guides, they are easy to misplace and might confuse as much as inform.

By the way, the BASIC manual is now complete. You don't have to send in a coupon and replace pages to get up-to-date documentation.

Power Supply and Keyboard

The power supply is 190 watts, as opposed to the 63 watts in the PC and 130 watts in the XT. This much power is needed. The PC is underpowered, causing many users to have hard-to-trace problems when adding to their systems. The XT's supply is much better but would be inadequate for the AT's two hard-disk drives. Since what goes in as electricity always comes out as heat, IBM has incorporated an innovative variable-speed fan that runs faster (and louder) as the internal temperature rises. Since my system was lightly loaded, the noise level never became obtrusive. A notable addition to the AT is a line-voltage select switch that lets it run on European 220-volt power.

The AT's keyboard and interface are more sophisticated than those on the PC and they are not compatible. You cannot use an AT keyboard on a PC. A single-chip microcomputer on the system board manages the keyboard and related functions. Any PC software that goes directly to the keyboard interface hardware, some key-translation programs, and many games will not work on the AT.

The keyboard layout is similar to that used on an IBM Selectric typewriter (see photo 2). The Shift, Control, Enter, and backspace keys have all been enlarged. Some of the less frequently used keys, such as backslash, grave accent, Print Screen, and Escape, have been moved to peripheral portions of the keyboard.

Three status lights have been added to the Caps Lock, Scroll Lock, and Num Lock keys— this is a welcome feature. The only new key, Sys Req, causes the keyboard handling software that's in ROM (read-only memory) to generate a software interrupt whenever the key is pressed or released. This lets the user signal the operating system for attention. PC-DOS currently ignores Sys Req.

To go with its international power supply, IBM provides six different versions of the AT's keyboard for foreign languages. The layout and internal scan codes are all identical, but some of the key legends are different to permit use of symbols peculiar to specific languages. The standard display adapters can display these characters, and DOS 3.0 has a set of utilities to adapt itself to the specific keyboard type.

On the output side, the AT uses the standard PC display cards and so is completely compatible. Graphics generation is much faster than it is on the PC.

Much has been said about the inclusion of a key switch that disables the keyboard and locks the cover in place. It seems to me that this feature is of limited usefulness. You would have to secure the entire system and external wiring to prevent someone with malicious intent from interfering with a running program. A program can test the state...

By the way… remember PC compatibility? The AT came with a new keyboard, and software for IBM's earlier PCs that accessed the keyboard hardware directly (games, for example)… didn't work on the AT. The jump from the 8088 to the 286, to nobody's surprise, brought problems of its own too.

The AT, by the way, was the first that could use the brand-new high-density disks, with their nearly infinite 1.2 megabytes (5¼-inch disks, of course), at 500 kilobits per second. Surprise: the disk drives were also a compatibility problem: if you wrote to a double-density disk, it wouldn't necessarily be readable on other PCs. Operating systems? PC DOS 3.0 (no, not MS DOS) or Digital Research's Concurrent DOS.

And from the ads, I'll single out the SORD IS-11C. Doesn't the screen hinge give you a certain anxiety?

Ad for Sord Computer's IS-11C. It is a portable word-processing and email system programmable in BASIC. The hinge supporting the screen is less than half the width of the screen itself.

Wikipedia says it had a Z80 and that the screen resolution was 256 × 64. It says nothing about weight or battery (something tells me it only worked plugged into the wall).

And with that I move on to the programming section, where we find, first…

0.8660254 ≈ √3/2

An algorithm that converts decimals to fractions

If a magazine today included a program for converting decimal numbers to fractions, one capable of recognizing at least a few square roots (the program recognized the roots of 2, 3, and 5, plus π and π²), heads would explode. (The author, to top it off, was a medical student from Stockholm.)
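(If you're wondering how a program could "recognize" √3/2: one plausible trick, and I'm guessing here because I haven't reimplemented the magazine's algorithm, is to look for a nice fraction not only in the number itself but also in its square, since 0.8660254² ≈ 3/4. A Python sketch, with arbitrarily chosen tolerances:)

```python
from fractions import Fraction

def as_fraction(x, max_den=100, tol=1e-6):
    """Best rational approximation with a small denominator, or None."""
    f = Fraction(x).limit_denominator(max_den)
    return f if abs(float(f) - x) < tol else None

def recognize(x):
    """Try x itself, then x**2, which catches square roots of fractions."""
    f = as_fraction(x)
    if f is not None:
        return f"{f.numerator}/{f.denominator}"
    f2 = as_fraction(x * x)
    if f2 is not None:
        return f"sqrt({f2.numerator}/{f2.denominator})"
    return None  # not recognized; the article also special-cased pi and pi**2

print(recognize(0.6))        # -> 3/5
print(recognize(0.8660254))  # -> sqrt(3/4), i.e. sqrt(3)/2
```

And as if that weren't enough…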

Computing Pi

Using infinite series to compute mathematical functions

(The program uses the Taylor series of the arctangent and produces fifteen digits of π, more than enough for basically any practical calculation.)
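(Fifteen digits is essentially the limit of double-precision floats, and you can get there with a couple of arctangents. A sketch of the idea in Python, using Machin's 1706 formula, although I don't know which particular series the magazine's program summed:)

```python
def arctan(x, terms=60):
    """Taylor series: atan(x) = x - x**3/3 + x**5/5 - ..."""
    total, power = 0.0, x
    for n in range(terms):
        total += (-1) ** n * power / (2 * n + 1)
        power *= x * x
    return total

# Machin's formula: pi/4 = 4*atan(1/5) - atan(1/239)
pi = 16 * arctan(1 / 5) - 4 * arctan(1 / 239)
print(f"{pi:.15f}")  # 3.14159265358979..., good to about 15 digits
```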

Anyway. We'll be back, with a bit of luck, next month. If you want a preview of the "future", you can cheat here.

Byte, April '85

We continue with the project of reading Byte magazine every month… forty years late (you'll find the previous entries under the blog's Byte tag, though let's see if I find the time to put together a somewhat more polished index (which, admittedly, most likely only I would ever use)).

I said last month that this issue would come fully loaded, and so it does:

Cover of the April 1985 issue of Byte. The cover story is artificial intelligence. The illustration is a human hand drawing a robotic hand next to a robotic hand drawing a human hand.

…but not loaded with the things I usually highlight; rather with a good pile of articles on AI. I drop a few "plus ça change" remarks into these entries, but this time the whole central block of the magazine is one big "plus ça change". So much so that, from the rest of the issue, I'll only keep the April Fools' joke:

Photo of a Mac accessory that is a knife sharpener. The text accompanying the photo reads as follows:

Knife the Mac

Ennui Associates has announced MacKnifer, a hardware attachment that mounts on the side of your Macintosh and sharpens knives, scissors, lawn-mower blades – anything that needs sharpening. With MacKnifer's patented double-action grinding wheel, you can easily sharpen any utensil in less time than it takes the Mac to open a file. According to the manufacturer, MacKnifer is so easy to use that you can operate it within 30 minutes of taking it out of the box. Turn your spare computing time into extra cash with a knife-sharpening business on the side... of your Macintosh.

For more information on MacKnifer, contact Ennui Associates, 52502 Marginal Avenue, Somnolencia, CA, 90541.

Here's the corresponding hoaxes.org entry: https://hoaxes.org/af_database/permalink/the_macknifer, in case anyone needs it.

(By the way, I've decided to switch "supplier" for the magazines, to this archive.org page, and as far as possible (read: whenever I remember) I'll try to link the articles and pieces I comment on.)

Getting down to business, things open with none other than Marvin Minsky:

COMMUNICATION WITH ALIEN INTELLIGENCE
by Marvin Minsky

It may not be as difficult as you think

WHEN FIRST WE MEET those aliens in outer space, will we and they be able to converse? I believe that, yes, we will— provided they are motivated to cooperate— because we'll both think in similar ways. I propose two kinds of arguments for why those aliens may think like us, in spite of having very different origins. These arguments are based on the idea that all intelligent problem solvers are subject to the same ultimate constraints – limitations on space, time, and materials. For animals to evolve powerful ways to deal with such constraints, they must have ways to represent the situations they face, and they must have processes for manipulating those representations. These two requirements are:

Economics: Every intelligence must develop symbol systems for representing things, causes, and goals, and for formulating and remembering the procedures it develops for achieving those goals.

Sparseness: Every evolving intelligence will eventually encounter certain very special ideas— e.g., about arithmetic, causal reasoning, and economics— because these particular ideas are very much simpler than other ideas with similar uses.

The economics argument is that the power of a mind depends on how it manages the resources it can use. The concept of thing is indispensable for managing the resources of space and the substances that fill it. The concept of goal is indispensable for managing how we use the time we have available—both for what we do and what we think about. Aliens will use these notions too, because they are both easy to evolve and because there appear to be no easily evolved alternatives for them.

The sparseness theory tries to make this more precise by showing that almost any evolutionary search will soon find certain schemes that have no easily accessible alternatives, that is, other different ideas that can serve the same purposes. These ideas or processes seem to be peculiarly isolated in the sense that the only things that resemble them are vastly more complicated. I will discuss only the specific example of arithmetic and conjecture that those other concepts of objects, causes, and goals have this same island-like character.

Critic: What if those aliens have evolved so far beyond us that their concerns are unintelligible to us and their technologies and conceptions have become entirely different from ours?

Then communication may be infeasible. My arguments apply only to those stages of mental evolution in...

Artificial-intelligence pioneer Marvin Minsky is Donner Professor of Science in the Department of Electrical Engineering and Computer Science at Massachusetts Institute of Technology (545 Technology Square, Cambridge, MA 02139). In the late 1950s, Minsky, together with John McCarthy (now at Stanford), created MIT's AI laboratory, of which Minsky was the director for several years. Minsky has long been interested in SETI (the Search for Extraterrestrial Intelligence) and participated in the important 1971 conference on communication with extraterrestrials, held in Soviet Armenia and organized by Carl Sagan.

Minsky co-founded MIT's AI lab, had received the Turing Award in 1969, invented the first head-mounted display, co-designed the Logo turtle with Seymour Papert, and for his doctoral work built SNARC in the early fifties, one of the first attempts at a machine imitating the behavior of the human brain: designed to simulate a neural network (a set of interconnected artificial neurons), it emulated rats running mazes and gradually learned the correct path based on rewards (what we now call reinforcement learning). A caveat: Minsky (who died in 2016) was associated with Jeffrey Epstein and visited his private island, although Minsky's wife, who was there with him, maintains that he never did anything morally questionable there.

Minsky, who was very interested in SETI, the project searching for extraterrestrial life, lays out in the article his hypothesis that any intelligence, alien or not, must be similar, and that communication therefore shouldn't be very hard, unless the other intelligence has moved past the stage of caring about survival, communication, and expanding its control of the physical world. He leans on a thought experiment exploring Turing machines, and on the universality of arithmetic, ending up at the inevitability, in turn, of many aspects of language (the reasoning sounds like Chomsky to me, for some reason). I won't dare summarize or judge the article, but combining the AI of artificial intelligence with the AI of alien intelligence is a curious move, to say the least…

THE QUEST TO UNDERSTAND THINKING

Roger Schank and Larry Hunter

It begins not with complex issues but with the most trivial of processes

ARTIFICIAL INTELLIGENCE, or AI, takes as its subject matter some of the most daunting questions of our existence. What is the nature of mind? What are we doing when we are thinking, feeling, seeing, or understanding? Is it possible to comprehend how our minds really work? These questions have been asked for thousands of years, but we've made little tangible progress at answering them.

AI offers a new tool for those pursuing the quest: the computer. As anyone who has used one can attest, computers often create more problems than they solve. But for probing the issues of mind and thought, that is just what we need.

The fundamental use of computers in helping us understand cognition is to provide a testbed for our ideas about what the mind does. Theories of mind often take the form of process descriptions. For example, a theory of question answering might claim that people first translate a question into an internal representation, use that representation as an index into memory, translate the recalled memory into an appropriate form for an answer, and then generate the words to communicate it. (This example is offered not as a real theory of question answering but as an example of what a process theory of mind might look like.)

Process theories seem to be a good way of describing what might go on inside the brain. One problem with them, however, is that all too often what looks like a good description really isn't specific enough to make the theory clear. "Use the representation as an index into memory" isn't a good explanation of the processes behind remembering a fact. How are facts recalled? How is the memory organized? What happens when memory gets very large? What if a fact isn't directly encoded in memory but can be inferred from something that is? A researcher trying to write a program that embodies the above simplistic theory would run into all of these problems and more. That's why we need to write programs. Programming forces us to be explicit, and being explicit forces us to confront the problems with our theories.

Not long ago, AI researchers like ourselves focused on what they considered to be manifestations of highly intelligent behavior: playing chess, proving mathematical theorems, solving complex logical puzzles, and the like. Many AI researchers devoted a lot of energy to these projects and found powerful computational techniques for accomplishing such "intelligent" tasks. But we discovered that the techniques we developed are not the same ones that people actually use to perform these tasks, and we have instead begun to concentrate on tasks that almost any adult finds trivial: using language, showing common sense, learning from past experiences.

Language

We began studying these "trivial" tasks by trying to write programs that...

The next article also has "wikipediable" authors: Roger Schank earned a PhD in linguistics after majoring in mathematics and was a professor of computer science and psychology at Yale, where in 1981 he founded the Yale Artificial Intelligence Project; in 1989 he would do the same with the Institute for the Learning Sciences at Northwestern. He did research on natural-language understanding and case-based reasoning. And, I'm afraid, not only did he also know Epstein (it helps that Epstein occasionally funded AI research), like Minsky, but he showed his support when the scandal began to break :-S. Lawrence Hunter, for his part, works these days in computational biology, a field he came to through case-based reasoning for the diagnosis of lung cancer.

And the article? It touches, first, on a topic that strikes me as vital and, at the same time, utterly absent from today's debate: how artificial intelligence could be a very good tool for helping us understand what "natural" intelligence is and how it works. It then focuses on some of the problems of processing natural language, such as ambiguity, context, and memory (the remembering kind, not necessarily RAM).

I'm running over my word count, so I'll just mention The LISP Tutor and PROUST, An Automatic Debugger for Pascal Programs, which, as the reader can imagine, focus on uses of AI (which now seem closer, though we'll see) for teaching programming and for helping us program.

And we close with…

LEARNING IN PARALLEL NETWORKS

Simulating learning in a probabilistic system

THE BRAIN is an incredibly powerful computer. The cortex alone contains over 10^10 neurons, each connected to thousands of others. All of your knowledge is probably stored in the strengths of these connections, which somehow give you the effortless ability to understand English, to make sensible plans, to recall relevant facts from fragmentary cues, and to interpret the patterns of light and dark on the back of your eyeballs as real three-dimensional scenes. By comparison, modern computers do these things very slowly, if at all. They appear very smart when multiplying long numbers or storing millions of arbitrary facts, but they are remarkably bad at doing what any five-year-old can.

One possible explanation is that we don't program computers suitably. We are just so ignorant about what it takes to understand English or interpret visual images that we don't know the appropriate data structures and procedures to put into the machine. This is what most people who study artificial intelligence (AI) believe, and over the last 20 years they have made a great deal of progress in reducing our ignorance in these areas.

Another possible explanation is that brains and computers work differently. Perhaps brains have evolved to be very good at a particular style of computation that is necessary in everyday life but hard to program on a conventional computer. Perhaps the fact that brains store knowledge as connection strengths makes them particularly adept at weighing many conflicting and cooperating considerations very rapidly to arrive at a common-sense judgment or interpretation. Of course, any style of computation whatsoever can be simulated by a digital computer, but when one kind of machine simulates a very different kind it can be very slow. To simulate all the neurons in a human brain in real time would take thousands of large computers. To simulate all the arithmetic operations occurring in a Cray would take billions of people.

It is easy to speculate that the brain uses quite different computational principles, but it is hard to discover what those principles are. Empirical studies of the behavior of single neurons and their patterns of connectivity have revealed many interesting facts, but the underlying computational principles are still unclear. We don't know, for example, how the brain represents complex ideas, how it searches for good matches between stored models of objects and the incoming sensory data, or how it learns. In this issue Jerome A. Feldman describes some current ideas about how parallel networks could recognize objects (see "Connections" on page 277). I will describe one old and one new theory of how learning could occur in these brain-like networks. Please remember that these theories are extreme idealizations; the real brain is much more complicated.

Associating Inputs with Outputs

Imagine a black box that has a set of input terminals and a set of output…

… none other than a physics Nobel laureate (and Turing Award winner, and Princess of Asturias Award winner, and I've lost count of the rest): Geoffrey Hinton. Giving a physics Nobel to a psychology graduate with a doctorate and research career in AI is something I won't get into now, but you have to hand it to the magazine: scoring the coup of publishing him when he was a mere assistant professor at Carnegie Mellon, alongside figures with, at least on paper, far more star power, is not bad at all. Even more so when what he is explaining is, if I've understood correctly, the work on training neural networks that is one of the pillars on which all his eventual recognition and prizes rest.
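(The "old theory" of learning that the article starts from can itself be shown in a few lines: the perceptron rule, where each connection strength is nudged in proportion to the output error. A minimal sketch of my own, not Hinton's code:)

```python
def train_perceptron(samples, epochs=25, lr=0.1):
    """samples: list of (inputs, target) pairs, target 0 or 1.
    Learns the weights of a single threshold unit."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # -1, 0, or 1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]  # adjust connection strengths
            b += lr * err
    return w, b

# Learn logical AND from four examples:
w, b = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
```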

I won't go on any longer, but the special's entire table of contents deserves at least a quick look…

And, in any case, forty years is nothing.

If you want to keep reading, here are my notes on the March issue. And next month, with a bit of luck, more.

Ray-Ban Meta: accidentally accessible

At the end of 2023, when the Ray-Ban Meta glasses came out, I wrote a brief little piece about how much it annoyed me that the gadget's potential usefulness for people with visual disabilities had never crossed Meta's mind (not that the topic showed up in the media where I saw the news either, it must be said).

Yesterday my little brother sent me the evaluation of the Ray-Ban Meta smart glasses carried out by ONCE (I couldn't find the date it was done). They conclude:

The Ray-Ban Meta Smart Glasses are not a device specifically designed for blind people, although they offer some features that could be of interest to this group.

While the Meta View app is largely accessible via screen readers, and the glasses allow making calls and video calls, sending messages, and playing audio, their usefulness for blind people is limited by the impossibility of fully using the AI for image description and text reading in Spanish.

Despite these limitations, the ability to make video calls and let the other person see through the glasses' camera could be a valuable feature for some blind people, as could the ability to perform various functions by voice alone, leaving the user's hands free.

With Meta AI's arrival in Europe, I'd like to think the language barrier doesn't have long left to fall, but something tells me Meta won't bother promoting these uses for those people (even though it would be an excellent opportunity to polish the company's image a little).

And all that's left is to hope that someone manages to put out an equivalent but open product, even if it's one, two, or three hundred euros more expensive.



PS 20250528 In mid-May Meta rolled out some very interesting accessibility features it had announced back in November. Only in a few English-speaking countries for now, but it's something.