Everything is an AI remix

For some time (months) I'd had a browser tab open on the latest version of that marvelous video, Everything is a Remix

…with the intention of giving it a rereading "in the age of generative AI." And it turns out that, in the last link of issue #67 of Interaccions, Sergi Muñoz Contreras's wonderful newsletter, he reveals to me that this rereading has in fact already been done by its own author, Kirby Ferguson. Having watched it, I think it's among the best things said on the subject (because very often, even when an argument aligns with my own ideas, I find the quality of the reasoning lacking).

My recommendation: even if you watched one of the many versions of Everything back in the day, rewatch it calmly now before watching the second video.

And I'll only add that, being a very, very big fan of Everything, and agreeing with quite a lot (but not all) of the rereading, I think it skips over what I call "the apprentice problem" (which surely has a better name): while many (potentially even most) uses of AI as a tool seem legitimate to me, such as those proposed in AI is remixing (and yes, I'm sidestepping the ocean-sized puddles of sustainability and respect for intellectual property), AI is a weapon of mass destruction aimed at the learning process that turns an apprentice into a master, and that terrifies me. I think we'll solve it someday. But it terrifies me.

Scott and Mark and the teaching of programming

Scott Hanselman is a thoroughly nice guy with a highly recommendable podcast (from recent episodes I'd pick the one on the Interim Computer Museum or the one with Anne-Laure Le Cunff, but the average level is high). Scott is also the vice president of developer community at a small company called Microsoft (a position that let him open-source Microsoft's BASIC for the 6502, which taught a lot of people to program on the Apple II and on the Commodore PET, VIC-20 and Commodore 64, the machine I learned on, and for that alone I'd be eternally grateful to him).

At Microsoft, Scott met Mark Russinovich, whose LinkedIn says he is "CTO, Deputy CISO and Technical Fellow, Microsoft Azure", but whom you may know better (those of you of a certain age who like computers and use Windows) from SysInternals. Scott and Mark have another podcast, also very well put together, Scott and Mark Learn To…, which in recent episodes has talked a lot about a product Microsoft sells hard: programming with generative AI. Of all those episodes, I'll single out this fragment and what Russinovich says toward the end. Below is the video at the indicated timestamp, along with the transcript.

Just a few quick notes first, that…

  • …they talk about computing, but it applies to many other fields, if not all of them,
  • it isn't the most original opinion in the world, but it's good that it's being said by who is saying it,
  • the claim that universities don't have a good model is rigorously true, but good luck to whoever manages to come up with a solution that isn't brutally intrusive or impossibly expensive,
  • and that I'm highlighting one fragment of the conversation, but the part about companies and what they look for / should look for when hiring young people is also very good (translation: aligned with what I think, on a subject that touches me fairly closely), and in general the whole episode, and the whole podcast, are well worth your time.

So there you go: the video and the transcript.

More some other day.

Transcript

(Based on the YouTube transcript, corrected and lightly edited by me to make it a bit more readable (I hope). The bold emphasis is mine.)

—So as we end… we're at an inflection point. What should university students that are studying CS right now, sophomores, juniors, seniors, be thinking about as we sit at that point?

— I have a friend whose kid is a computer science student, a junior. He said he was talking to them and asked, do you use AI? And the answer was: yeah, a lot of my fellow students are using AI. I don't use AI, because I want to learn the hard way.

— I think both is the right answer, though, I feel.

— I think both, but here's what I'll tell you right now: I think that universities don't have a good, consistent model.

— They’re behind. Academia might, but research level academia.

— But not for teaching undergrads. And, actually, I think what is coming into view for me is that you need classes where using AI for certain projects or certain classes is considered cheating. Not to say that you don’t have classes and projects in some classes where the student is told to use AI, but you need to have the main basis for the education on computer science and programming to be AI-less, because that’s the only way the student’s going to learn.

— I’ve been saying «drive stick shift». And I get told that I’m being gatekeepy when I say that.

— I don't think you are, because there is a great study from three months ago from MIT where they took, not students, but people already in the workforce, and they divided them into three cohorts and had them write essays from the SAT. One cohort just did it closed-book, writing the essay on their own. Another cohort got to use Google, and another cohort got to use ChatGPT. They looked at their EEGs, and they quizzed them afterwards, right after, and then like a week later, and the results were exactly what you would expect. The people that wrote it could answer questions about what they wrote, even a week later, and their EEGs showed that they were burning a lot of wattage. The people that were using ChatGPT, an hour after they wrote the essay, they couldn't remember what they'd written.

— That’s the thing. It’s just not even there. That makes me really sad. I very much enjoy using AI to brainstorm, to plan, but then I want to do the writing part. To vibe your way through life has me concerned.

— You lose the critical thinking. And they call this critical thinking deficit, that is what it’s creating…

— Which we already have from social media.

— Yeah, we already have. And if you’re talking about the early and career programmers that we’ve been talking about wanting to hire at a company, you want them to know what a race condition is. You don’t want them to have vibed it and AI is like, «Yeah, a race condition. AI will fix that.» Because at some point, as we’ve said, I think with the limitations of AI and software programming, at least for the foreseeable future somebody needs to know.
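As an aside, the race condition Russinovich mentions is exactly the kind of concept that's hard to "vibe" your way past: two concurrent read-modify-write sequences interleave, and one update is silently lost. A minimal Python simulation, with the racy interleaving spelled out by hand so the failure is deterministic (with real threads it would only show up intermittently):

```python
# Two workers both try to increment a shared counter with a
# read-modify-write sequence. We interleave the steps manually to show
# the lost update that a real race condition produces intermittently.
counter = 0

read_by_a = counter        # worker A reads 0
read_by_b = counter        # worker B reads 0 (before A writes back!)
counter = read_by_a + 1    # worker A writes 1
counter = read_by_b + 1    # worker B also writes 1: A's increment is lost

print(counter)  # 1, even though two increments ran
```

Somebody on the team still needs to know why the answer is 1 and not 2, and why a lock fixes it.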


Byte, October '85

Cover of the October 1985 issue of Byte magazine. The cover story is Simulating Society, illustrated by a sheet of printer paper wrapped around human faces.

Here we go again with our re-read of the latest in computing… from forty years ago, through the Byte magazine archives at archive.org. This time: October 1985.

To begin with, don't complain that you aren't witnessing the great milestones of history. I give you… the high-density floppy disk! (I believe most of you reading me are old enough to appreciate that jumping from 720 kilobytes to 1.44 megabytes, while not revolutionary, was quite a leap.)

Sony, Toshiba Prepare High-Density 3½-Inch Disks

Sony announced in Tokyo that it has developed a 2-megabyte 3½-inch floppy disk, storing 1.6 megabytes (formatted) by doubling the number of sectors per track. The 2-megabyte medium uses a 1-micron magnetic layer (half the thickness of current 1-megabyte disks) and requires a higher coercivity (700 rather than 600-620 oersteds).

While the 2-megabyte versions use the same magnetic technology as earlier 3½-inch disks and drives, the magnetic heads of the drives require higher tolerances. An additional disk cartridge hole allows drives to distinguish between 1- and 2-megabyte disks.

Although it has already licensed 38 companies to produce 2-megabyte disks, Sony says it is waiting for formal standards to be set before marketing the disks and drives, which should be available to OEMs next year, probably at prices about 20 percent higher than 1-megabyte versions.

An even denser 3½-inch drive from Toshiba uses perpendicular recording technology to squeeze 4 megabytes of data onto a single-sided disk coated with barium ferrite. Toshiba plans to release evaluation units early next year, with full production slated for 1987.

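As a back-of-the-envelope check on that capacity jump: formatted floppy capacity is just geometry, cylinders × sides × sectors per track × bytes per sector, and doubling the sectors per track is exactly what separates the 720K format from the 1.44 MB one. A quick sketch (PC-style layouts; the 1.6-megabyte figure in the article corresponds to a different formatting scheme):

```python
def floppy_capacity_bytes(cylinders, sides, sectors_per_track, bytes_per_sector=512):
    """Formatted capacity of a floppy disk, from its layout."""
    return cylinders * sides * sectors_per_track * bytes_per_sector

# 3.5-inch double density (PC): 80 cylinders, 2 sides, 9 sectors per track.
dd = floppy_capacity_bytes(80, 2, 9)
# 3.5-inch high density (PC): same geometry, 18 sectors per track.
hd = floppy_capacity_bytes(80, 2, 18)

print(dd // 1024, "KB")  # 720 KB
print(hd // 1024, "KB")  # 1440 KB, i.e. the "1.44 MB" disk
```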

Hands up, anyone who knew or remembered that before Access, Microsoft's database (which wouldn't arrive until 1992), there was a Microsoft Access for connecting to information services over a modem (I had no idea / didn't remember it at all). Such is the hegemony of the database Access that I've barely been able to find any more information about it.

Microsoft Access ad. It shows a computer with the handset of a desk telephone, broken in half, lying on top of it. The headline: Don't get mad, get Access.

In our regular "you think this was just invented, but no" section, we have the books pages, which cover Computer Culture: The Scientific, Intellectual, and Social Impact of the Computer, available, of course, on archive.org, and which collected the papers of the conference of the same name; it turns out it's not only on Despacho 42 that we worry about these things, and naturally the conference was already worrying about the impact of AI…

Artificial Intelligence

Approximately one-fourth of Computer Culture (four papers and one panel discussion) deals specifically with artificial intelligence. The panel discussion on the impact of AI research is the most thought-provoking contribution in the book. As you might expect, this discussion is not so concise as an article dealing with the same topic, but the interaction among the panel members is intriguing. The panel consists of two philosophers (Hubert Dreyfus and John Searle) and three computer scientists (John McCarthy, Marvin Minsky, and Seymour Papert). Much of the discussion is spent identifying important questions about AI. Each panelist has a distinct viewpoint, resulting in a diversity of questions. Among these, however, two issues are of overriding concern: Can machines think? If they can, is machine thinking the same as human thinking?

The panelists seem to agree that computers can be used to study thinking, if for no other reason than to provide a contrast with human thought processes. On the other hand, the suggestion that appropriately programmed computers could duplicate human thought processes is much more controversial.

Aside from the philosophical issues, Papert makes a very important point when he argues that it is dangerous to reassure people that machines will never be able to challenge the intellectual capabilities of human beings. If people are lulled into a sense of security about machine capabilities, they will be ill prepared to deal with situations in which machines become better than people at doing specific jobs, he says. Whether or not the machines are described as thinking in these situations, the social and psychological issues raised by machine capabilities demand attention.
(I'm linking to the opening page of the books section, rather than the specific page of the fragment you have here. In any case, the full review is worth reading… and so is the book, if you get the chance.)

More things that weren't invented yesterday. I don't watch much of the kick-a-ball kind of football, but I do watch a fair amount of American football, a sport whose broadcasts wouldn't be the same without the obligatory Skycam, a camera that flies over the field hanging from four cables. And yes, it's turning forty:

Skycam: An Aerial Robotic Camera System

A microcomputer provides the control to add three-dimensional mobility to TV and motion picture cameras

On a morning in March 1983, a group of technicians gathered at Haverford High School in a suburb of Philadelphia. Each brought an electrical, mechanical, or software component for a revolutionary new camera system named Skycam (see photo 1). Skycam is a suspended, mobile, remote-controlled system designed to bring three-dimensional mobility to motion picture and television camera operation. (See the text box on page 128.) I used an Osborne 1 to develop Skycam's control program in my basement, and it took me eight months of evenings and weekends. As of 3 a.m. that morning, however, the main control loop refused to run. But 19 hours later, Skycam lurched around the field for about 15 minutes before quitting for good. Sitting up in the darkness of the press booth, hunched over the tiny 5-inch screen, I could see that the Osborne 1 was not fast enough to fly the Skycam smoothly.

In San Diego 18 months later, another group of technicians opened 20 matched shipping cases and began to get the Skycam ready for an NFL preseason game between the San Diego Chargers and the San Francisco Forty-Niners. The Skycam was now being run by an MC68000-microprocessor-based Sage computer, and a host of other improvements had been made on the original. [Editor's note: The Sage Computer is now known as the Stride; however, the machine used by the author was purchased before the company's name change. For the purpose of the article, the machine will be referred to as the Sage.] For the next three hours, Skycam moved high over the field, fascinating the fans in the stadium while giving the nationwide prime-time TV audience their first look at a new dimension in sports coverage.

Skycam represents an innovative use of microcomputers. The portable processing power needed to make Skycam fly was unavailable even five years ago. That power is the "invention" upon which the Skycam patents are based. It involves the support and free movement of an object in a large volume of space. The development team used the following experiment to test the movement and operation of the Skycam.

At a football field with one lighting tower at each of four corners, the team members bolted a pulley to the top of each pole, facing inward. Then they used four motorized winches, each with 500 feet of thin steel cable on a revolving drum and put one at the base of each tower.

Next, they ran a cable from each motor to the top of its tower and threaded the cable through the pulley. They pulled all four cables from the tops of the towers out to the middle of the field and attached the cables to a metal ring 2 feet in diameter weighing 10 pounds (see figure 1). A motor operator was stationed at each winch with a control box that enabled the operator to slowly reel in or let out the cable. Each motor operator reeled the cable until the ring was suspended a few feet from the ground, and then they were ready to demonstrate Skycam dynamics.

All four motor operators reeled in the cable. The ring moved upward quickly. If all four motors reel in at the same rate (and the layout of lighting towers is reasonably symmetrical) the ring will move straight up. In the experiment, the two motors on the left reeled in and the two on the right reeled out. The ring moved to the left and maintained its altitude. An instruction was given to the two motor operators on the left to reel out and the two on the right to reel in just a little bit. The ring moved right and descended as it moved back toward the center.

The theoretical basis of this demonstration is quite simple. For each point in the volume of space bounded by the field, the four towers and the plane of the pulleys, there is a unique set of four numbers that represents the distances between that point and each of the four pulley positions. Following the layout above for an arbitrary point on the field, you can...
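That "unique set of four numbers" is just the Euclidean distance from the camera position to each of the four pulleys, which is all the control program needs to compute how much cable each winch should pay out. A minimal sketch (the tower coordinates are made up for illustration):

```python
import math

# Hypothetical pulley positions at the tops of the four light towers:
# (x, y) in feet across the field, z = pulley height in feet.
PULLEYS = [(0, 0, 100), (360, 0, 100), (360, 160, 100), (0, 160, 100)]

def cable_lengths(point, pulleys=PULLEYS):
    """For a desired camera position, the four cable lengths (distances
    from the point to each pulley) the winches must pay out."""
    return [math.dist(point, p) for p in pulleys]

# Camera hovering over midfield at 50 feet:
lengths = cable_lengths((180, 80, 50))
```

Moving the camera is then a matter of smoothly varying the target point and re-solving; the symmetric point above gives four equal lengths, just as in the article's ring experiment.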

But this month my pick is the cover story: the use of computer simulation to model society:

Simulating Society

THE NEED FOR GREATER RIGOR in the social sciences has long been acknowledged. This month's theme examines computer-based simulation as a means to achieving that end. Simulation may be able to assist in evaluating hypotheses, not in the sense that an experiment in the physical sciences can test a hypothesis, but in the sense of making plain the ramifications of a hypothesis. The value of specifying a hypothesis with sufficient clarity to be amenable to programming and of examining the consequences of that hypothesis should not be underestimated. Indeed, one of the interesting aspects of the work presented here is that these researchers appear to be developing a tool for the social sciences that is not simply a poor stepchild of physical science methodologies.

Our first article, "Why Models Go Wrong" by Tom Houston, is a wonderfully readable account of the ways that you can misuse statistics.

Next, Wallace Larimore and Raman Mehra's "The Problem of Overfitting Data" discusses a difficult but important topic. Overfitting happens when your curve traces the noise as well as the information in your data. The result is that the predictive value of the curve actually deteriorates.

In "Testing Large-Scale Simulations," Otis Bryan and Michael Natrella show how validation (determining whether the specification for the simulation corresponds with reality) and verification (determining whether the simulation program corresponds with the specification) were achieved on a large-scale combat simulation they developed for the Air Force.

The ways of economic modeling are illustrated by Ross Miller and Alexander Kelso, who show how they analyzed the effects of proposed taxes for funding the EPA Superfund in "Analyzing Government Policies."

Michael Ward discusses his ongoing research in simulating the U.S.-Soviet arms race in "Simulating the Arms Race."

Several authors discuss new and surprising applications of simulation. In "EPIAID," Dr. Andrew Dean describes the development of computer-based aids for Centers for Disease Control field epidemiologists. Royer Cook explains how he fine-tuned a model in "Predicting Arson," and Bruce Dillenbeck, who uses an arson-prediction program in his work as a community activist, discusses modeling in "Fighting Fire with Technology."

Articles in other sections of the magazine that relate to this theme include Zaven Karian's review of GPSS/PC and Arthur Hansen's Programming Insight "Simulating the Normal Distribution."

When I began researching this theme, I took an excellent intensive course in simulation from Edward Russell of CACI. Dr. Russell's is the unseen hand guiding the development of this theme. Of course, any blame for bias in the choice of theme topics belongs to me, but much of the credit for the quality that is here must reside with him.

Don't miss the articles on the pitfalls, starting with the two that open the section, on the risks of bad modeling (a subject that, unfortunately, matters even more today than it did forty years ago), and continuing with the one on economic modeling with Lotus 1-2-3, or the one on epidemiology.
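The overfitting warning in Larimore and Mehra's article is easy to reproduce today: give a model one free parameter per data point and it will trace the noise, and its predictions away from the training data get worse, not better. A small self-contained sketch (the data is invented and fixed so the result is reproducible):

```python
# Fit the same noisy data two ways: an interpolating polynomial with one
# parameter per point, and a plain least-squares line. Then compare
# predictions off the training range.
xs = list(range(8))
noise = [0.3, -0.2, 0.1, -0.4, 0.2, -0.1, 0.4, -0.3]
ys = [2.0 * x + n for x, n in zip(xs, noise)]   # underlying law: y = 2x

def lagrange_interpolate(xs, ys, x):
    """Degree-(n-1) polynomial through every point: zero training error."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = fit_line(xs, ys)

x_test = 9.0
truth = 2.0 * x_test
err_line = abs(slope * x_test + intercept - truth)
err_interp = abs(lagrange_interpolate(xs, ys, x_test) - truth)

# The interpolant hits every training point exactly (it traces the noise),
# yet off the data its predictive value deteriorates badly.
assert err_interp > err_line
```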

Oh, and while we're on the subject of modeling… did you know that SPSS/PC+ not only already existed in 1985, but that the original SPSS had been launched back in 1968? If anyone can think of a piece of software that's been on the market longer, let me know.

Ad for SPSS/PC+. The slogan is Make Stat Magic, illustrated with a photo of a magician's top hat from which a 5¼-inch floppy labeled SPSS/PC+ emerges.

And of course we're not going to stop talking about the Amiga. This time it's Bruce Webster, another of the magazine's star columnists, telling us how blown away he is by the system's power, price and elegance:

According to Webster

Commodore's Coup

Product of the Month: Amiga

Last month, I made a few comments about the future of the home computer market, based on rumors I had heard about the Amiga from Commodore. In essence, I said that if what I had heard was true the Amiga might be the heir to the Apple II in the home/educational/small business marketplace.

Since writing that, I have seen the Amiga. I have watched demonstrations of its abilities; I have played with it myself; and I have gone through the technical manuals. My reaction: I want to lock myself in a room with one (or maybe two) and spend the next year or so discovering just what this machine is capable of. To put it another way: I was astonished. Hearing a description of a machine is one thing; seeing it in action is something else, especially where the Amiga is concerned.

I can tell you that the low-resolution mode is 320 by 200 pixels, with 32 colors available for each pixel (out of a selection of 4096). But that does not prepare you for just how stunning the colors are, especially when they are properly designed and combined. It also doesn't tell you that you can redefine that set of 32 colors as the raster-scanning beam moves down the screen, letting you easily have several hundred colors on the screen simultaneously.
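Those figures (32 on-screen colors out of a palette of 4096) come from the Amiga's 12-bit color registers: 4 bits each for red, green and blue, so 16 × 16 × 16 = 4096 possible values, of which 32 can be loaded into the hardware palette at once. A sketch of unpacking one palette entry (the helper name is mine):

```python
def unpack_amiga_color(value):
    """Split a 12-bit RGB palette entry (0x000-0xFFF) into its three
    4-bit channels, as stored in the Amiga's color registers."""
    red   = (value >> 8) & 0xF
    green = (value >> 4) & 0xF
    blue  = value & 0xF
    return red, green, blue

PALETTE_SIZE = 16 ** 3      # 4 bits per channel, three channels
assert PALETTE_SIZE == 4096

print(unpack_amiga_color(0xFFF))  # (15, 15, 15): white
```

Redefining palette entries mid-frame, as Webster describes, is what lets the machine show far more than 32 colors on screen at once.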

It also doesn't tell you how blindingly fast the graphics hardware is. If you've seen some of Commodore's television commercials demonstrating the Amiga's capabilities, or if you've looked at the machine yourself, you have some idea as to what the machine can do. If you haven't, I'm not sure I can adequately describe it.

Having seen the graphics on the Amiga, I have to smile when I hear people lump it together with the Atari 520ST. The high-resolution mode on the ST is 640 by 400 pixels with 2 colors (out of 512); on the Amiga, it is 640 by 400 pixels with 16 colors (out of 4096), and you can redefine those 16 colors as the raster-scanning beam goes down the screen. Also, the graphics hardware supporting all those colors is much faster. Little wonder, then, that a friend of mine, a game developer with several programs on the market, came back from the Amiga developers' seminar with plans to return the Atari ST development system at his house and to turn his attentions to the Amiga instead.

As I guessed last month, the real strength of the Amiga is its totally open architecture. An 86-pin bus comes out of one side of the machine, giving any add-on hardware complete control of the machine. What's more, 512K bytes of the 68000's 16-megabyte address space have been set aside for expansion hardware, 4K bytes each for 128 devices. A carefully designed protocol tells hardware manufacturers what data they should store in ROM (read-only memory) so that the Amiga can automatically configure itself when booted. This is a far cry from the closed-box mentality of the Macintosh, which has forced many hardware vendors through weird contortions just to get their devices to talk consistently to the Mac without crashing.

The memory map is well thought out. The Amiga comes with 256K bytes of RAM (random-access read/write memory); an up...

Sniff.

If you read the whole thing, please don't be alarmed when you reach the part where he mentions that RAM goes for 350 dollars (a bit over a thousand, adjusted for inflation) per 256 kilobytes. In other words: for what 256 kilobytes cost then, today you can buy about 320 gigabytes. A million to one. (And I suppose you won't be very surprised to learn that Apple's profit margins on RAM for its systems are not a twenty-first-century invention.)
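A quick sanity check of that million-to-one figure (today's price per gigabyte is my rough assumption):

```python
# 1985: $350 for 256 KB of Amiga RAM, roughly $1,000 in today's money.
kb_1985 = 256
dollars_today = 1000

# Assumed current street price, very roughly: about $3 per gigabyte.
dollars_per_gb_today = 3
gb_today = dollars_today / dollars_per_gb_today      # ~333 GB for the same money

# Compare kilobytes to kilobytes: the improvement factor.
ratio = (gb_today * 1024 * 1024) / kb_1985

print(round(gb_today), "GB;", f"{ratio:,.0f}", "to one")  # on the order of a million
```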

And that's it for this month. See you next month with the November issue.

Jeff Minter and the history of video games

Sometimes at work we put together really cool things… and then I completely forget about them. For a while now Joan Arnedo, among other things director of the UOC's online master's degree in Video Game Design and Programming, has been running RUMSXPLORA, an event devoted to preserving the memory of 1980s video games. For last year's edition Joan was looking for a star speaker, one focused on the Commodore world. And as we discussed it, the name of the legendary Jeff Minter came out of my mouth (if someone puts out a Llamasoft: The Jeff Minter Story and your career in the business starts with Centipede for the ZX81, passes through Gridrunner for the Commodore 64 and Attack of the Mutant Camels, continues with things for the Amiga and the Atari, then Atari's Jaguar, and reaches the present day, you are a legend).

And who knows how, but Joan tricked Jeff into coming to Barcelona, and the result is this talk. The talk, moreover, turned out to be a fantastic first-person tour of the history and evolution of video games from the very early eighties to today and, as a bonus, it contains a reminder of why it matters to preserve the memory of those little programs from the days when memory was measured in kilobytes.

(I'm sure that at some point, relatively early in the twentieth century, someone proposed a film archive to preserve the earliest movies, and someone else replied that there was no value in preserving something as crude as that. If we had been a bit more far-sighted, we would probably have more and better memories of the beginnings of cinema today. Ever the optimist, I hope we'll be more careful with the history of video games, but the industry works as hard as it can to burn its own history (read this and this).)

Anyway: don't miss Jeff Minter's talk, by the gods of the video game Olympus…

(Oh, and if you get bored, there's an episode of a certain podcast about the event that contains ten little minutes of someone talking with the legend ☺️.)

Byte, September '85. Ten years of Byte

Cover of the September 1985 issue of Byte. One cover story is homebrewing; the other is Ciarcia's supersystem, a Z80-compatible computer running at 6 megahertz with 256 kilobytes of RAM.

So here we go with the September '85 issue of Byte, the magazine's tenth-anniversary number… On the cover, a computer, but one built by one of the magazine's star authors, Steve Ciarcia, who pulled an 8-bit computer for the 16-bit era out of his sleeve, with schematics galore so you could build it yourself:

BUILD THE SB180 SINGLE-BOARD COMPUTER

PART 1: THE HARDWARE 

by Steve Ciarcia

This computer reasserts 8-bit computing in a 16-bit world

Newer, faster, better. These words are screamed at you in ads and reviews of virtually every new computer that comes to market. Unfortunately, many of the proponents of this rhetoric are going on hearsay evidence. While advertising hype has its place in our culture, a more thorough investigation may lead you to alternative conclusions.

Generally speaking, quotes of increased performance are basically comparisons of CPU (central processing unit) instruction times rarely involving the operating system. The 68000 is indeed a more capable processor than the 6502, but that doesn't necessarily mean that commercial application programs always run faster because the CPU has more capability. People owning 128K-byte Macintoshes have discovered this.

The bus size of the processor is only one factor in the performance of a computer system. Operating-system design and programming styles contribute much more to the overall throughput of a computer. It is not enough to simply compare 8 to 16 bits or 16 to 32 bits. For example, the Sieve of Eratosthenes prime-number benchmark runs faster in BASIC on the 8-bit 8052-based controller board presented in last month's Circuit Cellar than it does on a 16-bit IBM PC.
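The Sieve of Eratosthenes that Ciarcia cites was the benchmark of the era, popularized by Byte itself. For reference, a minimal version of the algorithm (the magazine's actual benchmark used a slightly different variant):

```python
def sieve(limit):
    """Return all primes up to `limit` using the Sieve of Eratosthenes."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Cross out every multiple of n, starting at n*n (smaller
            # multiples were already crossed out by smaller factors).
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n, prime in enumerate(is_prime) if prime]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```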

And in case that weren't enough for our "things you wouldn't see in a general-interest computing magazine today" section…

AN ANALYSIS OF SORTS

by Jonathan Amsterdam

How to choose one sorting algorithm over another

A friend told me recently that 90 percent of all the computer programs in the world sort. I can believe it. Our society's passion for organization has elevated the simple task of putting things in order to a position of major importance. And who better to carry out the job than those informational beasts of burden— computers?

Because of their significance, sorting algorithms have been thoroughly studied. Some are slow and some are fast. Some sort a few items and some sort millions of items. Here I want to discuss sorting in the context of three different algorithms: Selection Sort, for small lists; Quicksort, for larger lists; and Mergesort, for lists of a size so monstrous they can't fit into memory all at once. But first we will need to develop some simple tools to help us with our analysis of these algorithms.

Analysis

Our goal is to understand the efficiency of some sorting algorithms. But we are immediately faced with a problem: How can we study an algorithm in the abstract without considering the language it's written in or the machine it's running on? For example, any algorithm written in a high-level language will run faster when written in assembly language. And any program running on a microcomputer would run faster on a mainframe. We want to abstract away from these facts, to talk about an algorithm's running time independent of machine or language.

Indeed: a discussion brainy enough for a first-year Algorithms course on the different sorting algorithms (a great topic, always highly recommended reading), with its little charts of linear, quadratic and "n log n" complexities, not-so-basic algorithms…

Figure 1: the rates of growth of n, n log n and n squared
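To make the figure concrete, here is a minimal Python sketch (my own, not from the article) that tabulates the three growth rates for a few input sizes:

```python
import math

def growth_table(sizes):
    """Return (n, round(n log2 n), n^2) for each n, the three
    growth rates compared in the article's Figure 1."""
    return [(n, round(n * math.log2(n)), n * n) for n in sizes]

for n, nlogn, n2 in growth_table([10, 100, 1000]):
    print(f"n={n:5d}  n log n={nlogn:8d}  n^2={n2:9d}")
```

Even at n = 1000, n log n is about a hundred times smaller than n², which is why the article reserves the quadratic Selection Sort for small lists only.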

Listing 1: The algorithm for Selection Sort.

Selection Sort.

Input: an array, A, and its size, n.
Output: the same array A, in sorted order.
begin
  for i := 1 to n do begin
    m := i;
    for j := i + 1 to n do
      compare A[j] to A[m], making j the new m if it is less;
    swap A[i] and A[m];
  end
end.
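A direct Python transcription of the pseudocode above (a sketch of mine, not the article's code) might look like this:

```python
def selection_sort(a):
    """In-place selection sort, following the Listing 1 pseudocode:
    each pass finds the smallest remaining item and swaps it into
    position i."""
    n = len(a)
    for i in range(n):
        m = i
        for j in range(i + 1, n):
            if a[j] < a[m]:  # j becomes the new m if A[j] is less
                m = j
        a[i], a[m] = a[m], a[i]
    return a
```

The two nested loops are where the quadratic running time comes from: about n²/2 comparisons regardless of the input's initial order.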

…merge sort,

Figure 2: The mergesort tree

Listing 2: The algorithm for Mergesort.

Mergesort.

Input: a list, L.
Output: a sorted list, S.
begin
  if L is one item long, then S = L;
  otherwise, split L into two lists, L1 and L2, each about half as big;
  Mergesort L1 into S1;
  Mergesort L2 into S2;
  merge S1 and S2 into S.
end.
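In executable form, the recursive split-and-merge reads like this (again a sketch of mine, working on in-memory lists rather than the external files the article has in mind):

```python
def mergesort(lst):
    """Recursive mergesort following Listing 2: split in half,
    sort each half, then merge the two sorted halves."""
    if len(lst) <= 1:
        return lst[:]
    mid = len(lst) // 2
    left = mergesort(lst[:mid])
    right = mergesort(lst[mid:])
    # Merge: repeatedly take the smaller front item of the two halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Because the merge step only ever looks at the front of each half, the same idea works when the halves live on disk, which is why the article recommends Mergesort for lists too big for memory.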

…or Quicksort itself:

Listing 3: The algorithm for Quicksort.

Quicksort.

Input: an array A, with items from 1 to n.
Output: the same array, sorted.
begin
  choose a pivot;
  partition the list so that all items <= pivot are at positions < i;
  Quicksort A from 1 to i - 1;
  Quicksort A from i to n;
end.
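One way to flesh out that partition step is Hoare's in-place scheme, sketched here in Python (my own illustration; the article leaves the partition strategy abstract):

```python
def quicksort(a, lo=0, hi=None):
    """In-place quicksort following Listing 3: partition around a
    pivot, then recurse on the two sides."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    # Hoare-style partition: walk the indices inward, swapping
    # out-of-place pairs, until everything <= pivot sits left of i.
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    quicksort(a, lo, j)
    quicksort(a, i, hi)
    return a
```

The pivot choice is what makes Quicksort's n log n running time an average-case figure: a consistently bad pivot (say, the smallest item of an already-sorted list) degrades it to quadratic.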

But the point of this issue was the celebration of the magazine's first decade, and the consequent look back, interviews included, with the already-mentioned Ciarcia and with Jerry Pournelle (whom we talked about in the July issue).

NO CELEBRATION of BYTE's 10th anniversary would be complete without the acknowledgment of some of the events and contributions that helped to shape the magazine. In the too-few pages that follow, we tried to capture some of the flavor of the past 10 years.

Special thanks to all contributors and to the BYTE staff, especially Gregg Williams, who chaired the project, Richard Shuford, Rich Malloy, Mark Welch, and Stan Wszola.

A Microcomputing Timeline
Notable Quotes
Evolution of the Microprocessor
Interview: Carl Helmers
Interview: Steve Ciarcia
Ciarcia's Prodigious Output
Interview: Robert Tinney
Tinney Favorites
Interview: Jerry Pournelle

It's worth following the links to the magazine that I leave on each image, if only to enjoy the wonders of the industrial design of the second half of the seventies and the first half of the eighties…

Photos of the Sphere 1, Kim-1 and Altair 8800 computers, all from 1975

…such as the Sphere 1, the Kim-1 or the legendary Altair 8800, for example.

Picking up on topics that we sometimes forget go back a long way: keyboards.

The Keyport 717 has very little to envy of the craziest of today's keyboards.

In this month's Kernel we find the launch of Excel at a joint Microsoft and Apple press conference because, as you may not know, Excel was originally a Mac application.

Kernel

The ongoing construction work at Chaos Manor made it desirable for Jerry to escape yet again. He attended a joint press conference held by Microsoft and Apple in New York. The product introduced at the conference, Excel, is a spreadsheet for the Macintosh. Comments made at the press conference caused Jerry to put down some thoughts on software integration and whether or not we need it. He also looked at several new products, including a new version of BASIC from the inventors of the language.

This being our anniversary issue, Dick Pountain brings us a condensed history of personal computing in Great Britain. He also introduces us to a rugged new lap-held portable, the Husky Hunter.

From Japan, Bill Raike sends us an abbreviated history of that country's microcomputers and also discusses an innovative new product from Brother Industries— the SV-2000 Software Vending System.

In this month's According to Webster, Bruce describes his experiences at the West Coast Computer Faire. He discovered that it isn't as much fun as it used to be, but he found some interesting products on display. He also discusses Apple's plans for the Macintosh, predicts success for the Amiga, and looks forward to testing a host of new products.

Bob Kurosaka discusses the world of transcendental numbers in Mathematical Recreations. Some of them are familiar to us, such as e, the base of natural logarithms, and π. He looks at some hiding places for these two numbers and some ways to approximate their values.

And we couldn't skip, of course, the little piece Pournelle devotes to the Amiga, voting for it as the successor to the Apple II at the end of the eighties. If only, Jerry. If only.

Amiga

Among its other faults, Apple has been shamelessly neglecting the Apple II family, and specifically the Apple IIe. When the IIc came out a year ago, Apple cut the price of the IIe and slowed production, figuring the machine would die of its own accord. Instead, the sales jumped dramatically, easily outselling the IIc. People would see the IIc ads, come into the computer store, and walk out with a IIe. Why? Because the IIe had slots, while the IIc (like the Mac) was a closed machine. The IIe is a chameleon: With the right set of boards, you can make it look like and do just about anything. Case in point: The nicest development system I've ever used, including mainframes and minis, was an Apple IIe with 128K bytes of RAM, an Accelerator IIe card (3.5-megahertz 65C02), and two Axlon 320K-byte RAM disks (configured as four 160K-byte floppy disks). Apple's response to the increased IIe sales was to cut back on production and raise its price (while discounting the IIc). Even so, it wasn't until late 1984 that the IIc finally started outselling the IIe.

What does this have to do with the Amiga? Well, several machines are competing in the low-end market: the Atari 520ST, the Apple IIe, the Mac (to a lesser extent), and the Amiga. Guess how many of these are easily expandable? Just one: the Amiga. Guess which machine will probably end up being the Apple II of the late eighties? I don't think the IIe will, nor the Mac, and the ST is a tightly closed, nonexpandable box. My vote is for the Amiga. From what I can see, the Amiga's graphics, sound, 68000 processor, memory map (allowing up to 8 megabytes of RAM), and expansion bus give it the potential of a long and successful life. There's always the chance that Apple will, indeed, come out with a souped-up Apple II next year, but even with the Western Design Center chips (65816, etc.) and the nifty 3½-inch Duodisk (1.6 megabytes of storage), it will probably be too little, too late.
(I've linked the image to its large version, and not to the source in the Archive (though you always have the option of reading the image's alt text).)

And I'll close with two more pieces. The first, the usual fare in today's magazines (not): the typical article devoted to the numbers π and e…

pi, e, and All That

Sneaking up on transcendental numbers

by Robert T. Kurosaka

God made the integers; all else is the work of man.
— Kronecker

This famous quote of Leopold Kronecker serves as the starting point for this month's column. The integers (the whole numbers) can be used to construct other numbers.

We can construct rational numbers by dividing one integer by another. When we do so, we get either a terminating decimal (1/4=0.25) or a nonterminating, repeating decimal (7/18 = 0.388888 ...). Repeating, or cyclic, decimals are a fascinating study I may explore in a future column.
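The column's two examples can be reproduced by plain long division; this little Python sketch (mine, not Kurosaka's) generates the decimal digits of any fraction, showing how 1/4 terminates while 7/18 settles into a repeating 8:

```python
def decimal_expansion(num, den, digits=12):
    """Long division: return up to `digits` decimal digits of
    num/den (the fractional part), stopping early if it terminates."""
    out = []
    num %= den          # keep only the fractional part
    for _ in range(digits):
        num *= 10
        out.append(str(num // den))
        num %= den
        if num == 0:    # remainder hit zero: the decimal terminates
            break
    return "".join(out)

print(decimal_expansion(1, 4))    # terminates: 25
print(decimal_expansion(7, 18))   # repeats:    388888888888
```

Because each step's remainder is one of only `den` possible values, a nonterminating expansion must eventually revisit a remainder and cycle, which is exactly why every rational number's decimal either terminates or repeats.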

Irrational Numbers

We can also construct numbers that are both nonterminating and nonrepeating. It is a rather amazing notion that a string of digits may go on forever without having to establish a pattern. It's such an odd notion that the ancient Greeks originally did not believe it possible— or even imaginable. When it was established that the square root of 2 was such a number, the Greeks called this kind of number irrational. The root meaning of irrational is "without ratio," or unable to be expressed as a fraction. The Greeks found such numbers irrational not only in the sense of "non-ratio-able" but also in the sense of "nonsensical."

The differences between rational and irrational numbers are substantial. It can be shown that no more rational numbers exist than do whole numbers, but irrational numbers outnumber rational numbers. This fact, which is often presented as a paradox, is not especially surprising when you look at how we have constructed rational numbers. They are built up out of whole numbers and can be expressed as integer fractions. As I said above, irrational numbers cannot be so expressed.

TWO TYPES

There are two different kinds of irrational numbers. The first, like the square root of...

…and the second devoted to the VersaBraille II, a computer designed to work using braille, because concern for accessibility isn't new either:

VersaBraille II

Telesensory Systems has introduced the VersaBraille II system, a portable, disk-based electronic information processor for the blind. This braille computer lets you electronically store, process, and retrieve information. A special telephone modem can link VersaBraille II to other computers.

VersaBraille II consists of a standard 3½-inch microfloppy-disk system and a braille display that substitutes for a video monitor. Its memory holds up to 30,000 characters; disk support boosts the unit's capacity to 77,000 characters. This is adequate for many word-processing procedures, such as formatting, high-speed searching, and inserting, deleting, and relocating text. The system can simultaneously output braille and print information.

VersaBraille II is fully programmable. Menus guide the user to each of the system's programs. The manufacturer provides special software that converts VersaBraille II into a four-function calculator with algebraic logic, floating decimal point, square root, and percent. Plans for other software packages include a 50,000-word spelling checker, a two-way braille translator, and a language interpreter.

The price of a VersaBraille II system is $6995 plus shipping and handling.

I've found little information about the VersaBraille II, but if anyone wants to research its predecessor, the original VersaBraille, here's a document to start with.

Right, then. We'll be back next month with the October issue. By the way, if anyone wants to do the homework on their own, besides the magazine's archive on the Archive, you also have this beauty of a browser that my little brother sent me a few weeks ago.

See you next time!