Byte, November '85

I'm late! It's December and the cover of Byte says it's still November (of 1985, of course). Anyway, here we go, in a hurry, with the review of the magazine.

The cover of Byte, November 1985. The price is three dollars and ninety-five cents. The cover story is Inside the IBM PCs. The black-and-white illustration shows a human figure (it looks like a man in a suit, briefcase in hand) facing a huge PC-style computer of the era that seems to come apart into a kind of three-dimensional puzzle.

And we begin with what is still, in 2025, a staple of computer magazines, blog posts, and YouTube videos galore: public-domain utilities:

Public-Domain Utilities

Build an extensive software library for free

by Jon R. Edwards

THE EXTENSIVE public domain collection for the IBM Personal Computer and compatibles is a very valuable resource. It is easily possible to build an extensive software library and incorporate the utilities into your home projects or to save considerable time and effort by installing a RAM (random-access read/write memory)-disk and print spooler. Most programs in the public domain provide source code; you can learn from the code and, more important, you can customize the routines for your own requirements. Undoubtedly, some of the software will fill your needs, and the more obscure programs may simply trigger your imagination.

The notion that "free means shoddy" does not necessarily apply to this software. I suspect that most of the free utilities were originally written to fill individual needs and as part of the "hacker ethic" have been shared with the public. The programs adequately fill many needs, and they have a tendency, as the user community modifies and expands them, to become more and more bug-free and sophisticated. Most public-domain programs provide limited functionality, and their user interfaces and documentation are generally less polished than commercial products, but it is amazing how many commercial products do very little more than integrate the capabilities of programs that already exist in the public domain. If nothing else, exposure to these programs will make you more aware of what to look for and expect from the products you buy. And who knows—in the short descriptions that follow, you may find software that's perfectly suited to your needs. At least the price is right.

Free Software

To the best of my ability, I have concentrated on free, no-strings-attached software and not on shareware or user-supported software. There is, to be sure, a growing amount of shareware for the IBM family, and much of it is excellent (see "Public-Domain Gems" by John Markoff and Ezra Shapiro, March BYTE, page 207), but the products often do not provide source code, and their authors usually request a contribution; most users legitimately feel that the products deserve financial support.

Naturally, I cannot guarantee that the software you download will function as you hope it will. I certainly hope you find dozens of interesting utilities here and that your investigations lead you to new and exciting things, but I take no responsibility if the programs you download do nothing or turn your screen inside out.

Locating free software is getting easier and easier. There are more users groups, bulletin-board systems (BBSs), and public-domain copying services than ever before, and the...

Forty years later we are just as crazy about getting free utilities, and we still have to explain that "free" does not necessarily mean "bad". It is curious, though, to see that in 1985 one had to point out that many of the utilities came with their source code ("open source" became fashionable in the late nineties, says Wikipedia). And one breaks into a cold sweat at the thought of downloading software from BBSs over the modems of the day (even if programs back then weighed next to nothing compared with today's).

If you click through to the page and keep reading you will find disk utilities, memory utilities, system-status utilities, keyboard helpers, text and file manipulation tools, screen-control utilities, small applications, printing utilities, communications software, and programming languages (Forth, LISP, Logo). The usual: we have changed over forty years, but not as much as one might imagine.

I think it had been a while since we looked at the advertising:

Ten megabytes in 8 minutes is a bit over 20 kilobytes per second (my fiber connection easily does 50 megabytes per second, which is well over 20 gigabytes in 8 minutes, and USB 3 ports reach 500 megabytes per second) for barely 180 dollars of the day (460 euros today). Go on, complain that your USB stick is slow and expensive... And while we're at it, we can review the disk speeds of the era in general:

Factors Affecting Disk Performance

Four major physical factors determine overall disk performance: access time, cylinder size, transfer rate, and average latency.

Access time is the amount of time it takes to move the read/write heads over the desired tracks (cylinders). Once the heads are over the desired tracks, they must settle down from the moving height to the read/write height. This is called the settling time and is normally included in the access time. Specifications for AT and XT disk-drive options are shown in table A.

A cylinder is composed of all tracks that are under the read/write heads at one time. Thus, tracks per cylinder is the same as the number of data heads in the drive. Cylinder size is defined as tracks/cylinder x sectors/track x bytes/sector.

The Quantum Q540, for example, has four platters and eight data heads, while the Vertex V170 has four platters, seven data heads, and one servo head. The difference is that the Quantum drive uses an embedded (or wedge) servo, where the servo signal is embedded on the data tracks, preceding the data portion of each sector on the disk. The Vertex drive uses a dedicated servo that requires its own surface. This difference means that the Quantum drive has 8.5K bytes more data available to it before it must seek the next track; if all other factors were equal (which they aren't), the Quantum would be slightly faster in those cases that required reading that "extra" 8.5K bytes.

Transfer rate is the rate at which data comes off the disk. It depends on rotation rate, bit density, and sector interleaving. The first two factors are practically the same for all AT-compatible 5¼-inch hard disks, but not for all floppy disks (the AT's spin 20 percent faster than the other PC floppies).

Sector interleaving is used to cut down the effective transfer rate. The interleave factor of 6 used on the XT cuts the effective transfer rate from 5 megabits per second to 0.833 megabit per second. Note that embedded servo disks, such as those used in the XT and the AT, actually spin about 1 percent slower than 3600 revolutions per minute (rpm) to allow for the increased density due to the servo.

Average latency is the time required for a disk to spin one-half of a revolution. For hard disks, which spin at 3600 rpm, the average latency is 8.33 ms (1/3600 minute per revolution × 60 seconds per minute × 0.5 = 8.33 ms per half revolution). This is because, after the heads finish seeking and settling, you must wait for the required sector to come under the heads.
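The two headline numbers in the excerpt are easy to reproduce; a quick sketch (Python here purely for illustration; the figures are the article's):

```python
# Check the two figures quoted in the article: average rotational
# latency at 3600 rpm, and the XT's interleave-6 transfer rate.

RPM = 3600                      # nominal spindle speed of the era

seconds_per_rev = 60 / RPM      # one revolution takes 1/60 s
avg_latency_ms = seconds_per_rev / 2 * 1000
print(f"Average latency: {avg_latency_ms:.2f} ms")        # 8.33 ms

raw_mbit_s = 5.0                # raw rate off the platter
interleave = 6                  # the XT reads every 6th sector
effective_mbit_s = raw_mbit_s / interleave
print(f"Effective rate: {effective_mbit_s:.3f} Mbit/s")   # 0.833 Mbit/s
```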

The fastest thing of the era? 300 kilobytes per second. And I don't even feel old remembering it... At what price, you ask?

Four Hard Disks For Under $1000

Inexpensive help for your disk storage space woes

by Richard Grehan

IF YOU ARE a peruser of the back pages of BYTE like most of us, you cannot have failed to notice the plummeting prices of hard-disk systems, particularly those available for the IBM Personal Computer. It is commonplace to find a complete subsystem, including hard disk, controller card, and software, for under $1000.

The advantages of a hard disk should be obvious: Its speed, convenience, and storage space eliminate most of the agonies involved with managing a large pile of floppy disks. If you're interested in setting up a personal bulletin-board system, the purchase of a hard-disk system should be your top priority.

I selected four hard-disk systems from the pages of BYTE and other computer periodicals. My only criterion was that the complete system must cost less than $1000. This article by no means exhausts all the under-$1000 hard disks advertised, but it should give you an idea of some possible trade-offs and troubles if you decide that a hard disk is your PC's next peripheral. Performance and price information is provided in table 1.

The Sider

The Sider is from First Class Peripherals, a Carson City, Nevada, company. An external drive, it is consequently the easiest of the four to install. This also means that the drive has its own power supply; the only added power burden to your PC is the interface card. Additionally, since the Sider does not replace one of your system's floppy-disk drives (all of the internal drives reviewed install in place of one floppy-disk drive), you lose no functionality when you need to, say, copy one floppy disk to another. Best of all, you are spared the trouble of digging through the technical manuals to discover which switches on the PC's motherboard you have to flip to configure the IBM as a one-floppy system.

The Sider comes in a rather large (7 1/2 inches tall, 16 1/2 inches long, and 3 1/2 inches wide) cream-white molded-plastic housing. The hard disk is mounted on its side, and the mechanism is convection-cooled via the case's slotted top. (This slotted top warrants caution: Small objects and certainly fluids could be unwittingly dropped into the inner workings of the unit, inflicting heaven knows what damage.) Since the unit is taller than it is wide, I experienced a not-unjustified fear of knocking it over. A rather stiff but comfortably long cable connects the drive to the interface card. The installation and operation guide that comes with the Sider is a small 31-page booklet. It is clear and easy to read, obviously written for people with an absolute minimum of hardware knowledge. It includes numerous illustrations of what goes where an...

Yes. Under a thousand dollars (more than twenty-five hundred of today's, adjusting for inflation) is "inexpensive". And what capacities do you get? 800 dollars buys you an external disk (so portable: roughly 19 by 42 by 9 centimeters; I don't dare look up the weight) of ten megabytes that "only" needs to be switched on 30 seconds before the computer (I swear, click the image, turn the page and read). One of the internal ones, the SyQuest (a company that would last until its bankruptcy in 1998), reaches the outrageous figure of 30 megabytes #madreDelAmorHermoso. And if you need to economize, there's the Rodime, which gives you 10 megabytes for barely 500 dollars. They're flying off the shelves. Blessed Moore's law (and family).

Another thing that is not exactly recent? Give me a problem, any problem at all, and someone will solve it with a spreadsheet:

Circuit Design with Lotus 1-2-3

Use the famous spreadsheet to design circuits and print out schematics

by John L. Haynes

SPREADSHEETS, especially those with graphics, are not just for business applications; they can be of great help to circuit designers or anyone else designing systems that can be described by equations.

As an example, let's take a look at the application of one spreadsheet, Lotus 1-2-3, to one technical problem, electronic circuit design and analysis. We'll look at both digital and linear circuits.

Digital Circuits

Digital circuits are built from logic building blocks—inverters, NAND gates, flip-flops, etc. We can simulate each of these components with the equations in a cell of a spreadsheet, using the spreadsheet's built-in logical operators shown in figure 1. For instance, in the spreadsheet portion of Lotus 1-2-3, the equivalent of an inverter is the operator #NOT#, structured as #NOT#(A=1). This structure means the state of the operator #NOT# is not true, or equal to a logical 0, if the state in the parentheses is true. This is equivalent to the output of an inverter circuit whose input is A. Similarly, the model of a NAND gate, #NOT#(A=1#AND#B=1), is not true if input A and input B are both true. The flip-flop is a bit more complex, since its output depends not only on its input conditions but on the transition of a clock pulse. For simplicity, let's assume that there is a narrow clock pulse that triggers the flip-flop whenever the clock pulse is true—in other words, whenever its logic state is a logical 1. The Q output remains in its present state until the clock is true; it then assumes the state of the input D. The Q' output is the logical opposite of Q.

These actions are easily simulated using the logical @IF function. It is structured as @IF(A,B,C) and means IF A THEN B ELSE C. That is, if the logical condition of A is true, then the function equals B. Otherwise, the function equals C. Setting the variables as @IF(C=1,D,Q), we can interpret the state of the function as: if the clock C is true, the state is equal to D; otherwise, it remains Q. The Q' output is handled with the #NOT# operator.

Given the ability to simulate logic components with spreadsheet functions and operators, let's now look at how we can use this technique to build a simple digital circuit. The synchronizing circuit of figure 2 is a commonly encountered arrangement. Known variously as an edge detector, a synchronizing circuit, and a digital differentiator, it develops a pulse one clock period long when an external,

Electronic circuit design with Lotus 1-2-3. Seriously. It is not an April Fools' joke. Or it is, but a supreme one.
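And the mapping is real: the article's three cell formulas translate directly into ordinary Boolean expressions. A minimal Python sketch of the same building blocks (the function names are mine, not Lotus syntax):

```python
# The article's three cells, as plain Booleans:
#   inverter    -> #NOT#(A=1)
#   NAND gate   -> #NOT#(A=1#AND#B=1)
#   D flip-flop -> @IF(C=1,D,Q)

def inverter(a: int) -> int:
    return int(not (a == 1))

def nand(a: int, b: int) -> int:
    return int(not (a == 1 and b == 1))

def flip_flop(c: int, d: int, q: int) -> int:
    """Q follows D while the clock C is true; otherwise it holds."""
    return d if c == 1 else q

# One clock tick per step, like recalculating the spreadsheet:
q = 0
for clock, d in [(1, 1), (0, 1), (1, 0)]:
    q = flip_flop(clock, d, q)
    print(f"clock={clock} D={d} Q={q} Q'={inverter(q)}")
```

Each pass through the loop plays the role of one spreadsheet recalculation, which is exactly how the article steps its simulated circuit through time.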

And back to my pet topic, "things that would never, ever be published today in a general-interest magazine":

One Million Primes Through the Sieve

Generate a million primes on your IBM PC without running out of memory

by T. A. Peng

A POPULAR WAY to benchmark microcomputers is with the Sieve of Eratosthenes. It is a simple and effective method for generating prime numbers. However, if you try to use the Sieve to obtain more than a few thousand primes on your IBM PC, you will soon encounter the dreaded phrase, "Out of memory." You would think, then, that as far as microcomputers are concerned, the Sieve of Eratosthenes would be an impractical way to generate a large number of primes. This is not so. Let me show you how to use the Sieve to generate a million primes on your microcomputer.

Listing 1 (written in Microsoft BASIC) illustrates how, with very little memory, you can put 500,000 numbers through the Sieve to obtain all the primes less than 1,000,000. The idea is quite simple. Use an array of flags to represent the first 1000 odd numbers. After the nonprimes among them have been sieved out, reinitialize the array to represent the next 1000 odd numbers. Lines 120 through 140 initialize the array and lines 340 through 360 reinitialize it before you use it for the next 1000 numbers.

The largest prime whose square is less than 1,000,000 is 997, and it is the 168th prime, starting with the prime 2. To generate all the primes less than 1,000,000, you don't have to use primes larger than 997. This is the reason for line 220 and for the size of two of the arrays in line 110. The loop in lines 240 through 270 flags all numbers less than 1000 that do not yield primes. (We have K = I + nP, so that K + K + 1 = (I + I + 1) + 2nP = P(2n + 1), which is not a prime.) After each loop is executed, the value of K will be greater than 1000 (and K would flag the next number if the size of the array were larger), and this is remembered as K(C). The variable C keeps count of the primes generated, with C - 1 as the actual number of primes generated at the end of each loop. Line 390 assures that the value of K lies between 1 and 1000. You need line 460 to give the correct value for the prime Q in line 490.
All the variables except C, Q, and R are integer-valued. There is a reason for this. If the program executes correctly, the output of line 540 should read, "999,983 is the 78,498th prime and the largest less than 1,000,000."

It is clear how to modify listing 1 to generate all the primes less than 2,000,000 or even 10,000,000, but to get a predetermined number of primes, we need to know a little about their distribution. Specifically, what we need to know is the size of the arrays K and P and the largest prime to be used in the Sieve. And in order to know this, we must have a rough idea of how large the...

The Sieve of Eratosthenes, friends. Which, by the way, is not a particularly hard algorithm to understand (we leave it as an exercise for the reader to turn the page and try to make sense of the BASIC code on the following page :-)). Now I feel like checking how much RAM the little Python program that ChatGPT generates, in less time than you'd need to type the first three lines of the magazine's listing, actually uses... but not enough to really do it O:-).
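For the record, the segmented-sieve idea from the listing fits in a few lines of modern Python. This is my own sketch of the same trick (sieve in a small fixed-size window that gets reused), not a translation of the magazine's BASIC:

```python
import math

def primes_below(limit: int, segment: int = 1000) -> list[int]:
    """All primes < limit, sieving one fixed-size window at a time."""
    root = math.isqrt(limit)

    # Ordinary sieve for the "base" primes up to sqrt(limit)
    # (the article's primes up to 997 for limit = 1,000,000).
    flags = bytearray([1]) * (root + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, math.isqrt(root) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    base = [p for p in range(2, root + 1) if flags[p]]

    primes = list(base)
    low = root + 1
    while low < limit:                       # one window per pass
        high = min(low + segment, limit)
        window = bytearray([1]) * (high - low)
        for p in base:
            # First multiple of p inside [low, high).
            start = max(p * p, (low + p - 1) // p * p)
            for m in range(start, high, p):
                window[m - low] = 0
        primes.extend(low + i for i, f in enumerate(window) if f)
        low = high
    return primes

print(primes_below(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The article's check still holds: `len(primes_below(1_000_000))` comes out to 78,498, with 999,983 as the largest, and the working set never exceeds one window plus the base primes.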

And to close... multitasking:

Top View

IBM's long-awaited multitasking program makes its debut

BY TJ Byers

TOPVIEW is a multitasking program that, for $149, enables your IBM Personal Computer to install more than one program in the system. This is different from the window programs that presently claim to accomplish the same thing. When working with windows, you must quit a program before you can begin another. With TopView, however, you don't have to quit either one of them. Both can be resident on the screen—and, more important, in the microprocessor—at the same time.

Multitasking

TopView's multitasking capabilities allow several programs to run simultaneously (see photo 1). This isn't the same thing as switching between programs without quitting them; it means that you can actually have one program running in the background while using another. Let's say, for example, that you need to calculate a large spreadsheet, and the job will take several minutes. Instead of staring idly at the screen while the computer crunches away, you can banish the spreadsheet to TopView's background mode and proceed to work on another program— the computer will handle both tasks at the same time. While one program is making calculations in the background, the other can be receiving data from the keyboard. You lose no time waiting for one program to finish before you start the other.

Multitasking is not a new concept. Mainframe computers have used multitasking for many years to enhance their performance. What is new, however, is putting multitasking capabilities into a personal computer.

TopView brings multitasking to the IBM PC using a multiplexing technique known as time slicing. Basically, TopView divides the microprocessor's time into slots during which it switches rapidly from one program to another. The time slices are very short, on the order of milliseconds, and the switching action is not apparent to either the application program or the user, so the programs appear to be running concurrently on the machine. In actuality, they are processed consecutively in very quick order. The procedure gives a single computer the ability to run more than one program at a time.

Multitasking is not without its faults, however. While one program is being processed, the others are held in suspension. Consequently, the programs tend to run more slowly. The more programs you have running at the same time, the slower each apparently becomes. A quick benchmark test using TopView to conduct a simple word search of Writing Assistant on an IBM PC AT showed that it took a full 14 seconds to search a typical 3000-word file as...
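The time-slicing idea the article describes is easy to caricature with coroutines; here is a toy round-robin scheduler in Python (purely illustrative, and nothing to do with TopView's actual preemptive internals):

```python
# A toy round-robin "time slicer": each task gets one slice, is
# suspended, and goes to the back of the queue, so the two jobs
# appear to run at once.

from collections import deque

def spreadsheet_recalc():
    total = 0
    for i in range(1, 6):
        total += i
        yield f"recalc step {i} (running total {total})"

def keyboard_input():
    for ch in "HI":
        yield f"keystroke {ch!r}"

def run(tasks):
    ready = deque(tasks)
    log = []
    while ready:
        task = ready.popleft()
        try:
            log.append(next(task))   # one time slice
            ready.append(task)       # requeue the suspended task
        except StopIteration:
            pass                     # task finished; drop it
    return log

for line in run([spreadsheet_recalc(), keyboard_input()]):
    print(line)
```

The output interleaves the two jobs one slice at a time, which is the whole trick: processed consecutively, but apparently concurrent.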

Because in 1985, having a personal computer run several programs in parallel was not exactly trivial. So untrivial, in fact, that charging 150 dollars for the program that did it was not unreasonable. Even if it cut your software's performance by 75% (something you would only notice when running compute-intensive programs, sure, but you were the one who had to think about that) or ate a good chunk of your computer's RAM.

By the way: the "window" interfaces of the era were priceless (although, as it happens, "TUI" programs are back in fashion today, in a wonderful return to the past :-)).

A couple of photos of the attempts to show several applications on screen using a purely textual interface. I don't feel capable of giving a faithful description.

Well, let's leave it here; we're running late. More next month in a few days (more likely weeks).

As usual, you can find the Byte magazine archives on archive.org, and, if you like, you can get a head start with the December issue.

Everything is an AI remix

For a while (months) I'd had a browser tab open on the latest version of that marvelous video, Everything is a Remix...

...with the intention of giving it a rereading "in the age of generative AI". And it turns out that, in the last link of issue #67 of Interaccions, Sergi Muñoz Contreras's wonderful newsletter, he reveals to me that, in fact, that rereading has already been done by its own author, Kirby Ferguson. And after watching it, I think it is among the best things said on the subject (because on very many occasions, even when the argument aligns with my own ideas, the quality of the argumentation strikes me as poor).

My recommendation: even if you watched one of the many versions of Everything back in the day, rewatch it calmly now before watching the second video.

And I will only add that, being a huge fan of Everything, and agreeing quite a lot (though not entirely) with the rereading, I think it skips over what I call "the apprentice problem" (which surely has a better name): while many (potentially even most) uses of AI as a tool (such as those proposed in AI is remixing) seem legitimate to me (I am setting aside the ocean-sized puddles of sustainability and respect for intellectual property, admittedly), AI is a weapon of mass destruction aimed at the learning process that turns an apprentice into a master, and that is something that terrifies me. I think we will solve it someday. But it terrifies me.

Scott and Mark and the teaching of programming

Scott Hanselman is a lovely guy with a highly recommendable podcast (of the recent episodes, I'd pick the one on the Interim Computer Museum or the one with Anne-Laure Le Cunff, but the average level is high). Scott is also the vice president of developer community at a little company called Microsoft (which let him open-source Microsoft's 6502 BASIC, the one that taught so many people to program on the Apple II and on the Commodore PET, VIC-20, and Commodore 64, the machine I learned on; for that alone I would be eternally grateful to him).

At Microsoft, Scott met Mark Russinovich, whose LinkedIn says he is "CTO, Deputy CISO and Technical Fellow, Microsoft Azure", but who may ring more of a bell (for those of a certain age who like computers and use Windows) from Sysinternals. Scott and Mark have another podcast, also quite good, Scott and Mark Learn To..., which in recent episodes has talked a lot about a product Microsoft sells hard: programming with generative AI. Of all those episodes, I'll single out this fragment and what Russinovich says toward the end. I leave you the video at the indicated timestamp, along with the English transcript first and the Spanish translation afterwards.

Just a few notes first:

  • ...they talk about computing, but it applies to many other fields, if not all of them,
  • this is not the most original opinion in the world, but it is good that it comes from who it comes from,
  • the claim that universities don't have a good model is rigorously true, but good luck to whoever comes up with a solution that is neither brutally intrusive nor impossibly expensive,
  • and I am highlighting one fragment of the conversation, but the part about companies and what they look for / should look for when hiring young people is also very good (translation: aligned with what I think, since it's a topic that hits relatively close to home for me), and in general the whole episode, and the whole podcast, are quite recommendable.

And that's it; I leave you with the video, the transcript, and the translation.

More another day.

Transcript

(Based on the YouTube transcript, corrected and lightly edited by me to make it (I hope) somewhat more readable. The bold is mine.)

—So as we end… we’re at an inflection point. What should university students that are studying CS right now, sophomores, juniors, seniors, in CS, be thinking about as we are in that point?

— I have a friend that's got a student in computer science that's a junior, and he said he was talking to him and asked him: do you use AI? And he says, like, yeah, a lot of my fellow students are using AI. I don't use AI, because I want to learn the hard way.

— I think both is the right answer, though, I feel.

— I think both, but here’s what I’ll tell you right now. I think that universities don’t have a good model, you know, consistent.

— They’re behind. Academia might, but research level academia.

— But not for teaching undergrads. And, actually, I think what is coming into view for me is that you need classes where using AI for certain projects or certain classes is considered cheating. Not to say that you don’t have classes and projects in some classes where the student is told to use AI, but you need to have the main basis for the education on computer science and programming to be AI-less, because that’s the only way the student’s going to learn.

— I’ve been saying «drive stick shift». And I get told that I’m being gatekeepy when I say that.

— I don’t think you are, because there is a great study of three months ago from MIT where they took, um, not students, but they took people in careers, already in the workforce, and they divided them into three cohorts and had them write essays from the SAT, and they had one cohort just do it with their own closed book, just write the essay. They had another cohort that got to use Google, and they had another cohort that got to use ChatGPT, and they looked at their EEGs, and they quizzed them afterwards, right after, and then like a week later, and the results were exactly what you would expect. The people that wrote it could answer questions about what they wrote, even a week later, and their EEGs showed that they were burning a lot of wattage. The people that were using ChatGPT, an hour after they wrote the essay, they couldn’t remember what they’d written.

— That’s the thing. It’s just not even there. That makes me really sad. I very much enjoy using AI to brainstorm, to plan, but then I want to do the writing part. To vibe your way through life has me concerned.

— You lose the critical thinking. And they call this critical thinking deficit, that is what it’s creating…

— Which we already have from social media.

— Yeah, we already have. And if you’re talking about the early and career programmers that we’ve been talking about wanting to hire at a company, you want them to know what a race condition is. You don’t want them to have vibed it and AI is like, «Yeah, a race condition. AI will fix that.» Because at some point, as we’ve said, I think with the limitations of AI and software programming, at least for the foreseeable future somebody needs to know.

Translation (Spanish)

(Done with ChatGPT and reviewed by me. The bold, again, is mine.)

—Así que, para cerrar… estamos en un punto de inflexión. ¿Qué deberían estar pensando los estudiantes universitarios que estudian informática ahora mismo?

—Tengo un amigo que tiene un hijo que estudia informática, está en tercer año, y me dijo que le preguntó: “¿Usas IA?” Y él respondió: “Sí, muchos de mis compañeros la usan. Yo no, porque quiero aprender por el camino difícil.”

—Creo que ambas cosas son la respuesta correcta, sinceramente.

—Sí, ambas, pero te diré algo: creo que las universidades no tienen un modelo adecuado, coherente.

—Van por detrás. Quizás la academia investigadora sí, pero…

—Pero no en la enseñanza de grado. De hecho, creo que lo que se está haciendo evidente es que necesitamos clases en las que usar IA para ciertos proyectos o asignaturas se considere hacer trampa. No porque no debas tener otras clases o proyectos donde se indique explícitamente al estudiante que use IA, sino porque la base principal de la formación en informática y programación debe ser sin IA, porque es la única forma en que el estudiante realmente aprenderá.

—Yo suelo decir “hay que aprender a conducir con cambio manual”. Y me dicen que eso es una postura elitista1.

—No creo que lo sea, porque hay un estudio excelente de hace tres meses del MIT donde tomaron… no estudiantes, sino profesionales ya en activo, y los dividieron en tres grupos para que escribieran ensayos del tipo de la selectividad. A un grupo le dijeron que lo hiciera sin ayuda, a otro que podía usar Google, y a otro que podía usar ChatGPT. Luego midieron sus electroencefalogramas y los evaluaron justo después y una semana más tarde. Los resultados fueron exactamente los que esperarías: las personas que escribieron el ensayo por sí mismas eran capaces de responder preguntas sobre lo que habían escrito incluso una semana después, y sus electroencefalogramas mostraban mucha actividad cerebral. En cambio, quienes usaron ChatGPT, una hora después ya no recordaban lo que habían escrito.

—Eso es. Es que ni siquiera está ahí. Y eso me pone muy triste. Me gusta mucho usar la IA para generar ideas, para planificar, pero luego quiero escribir yo. Esa actitud de “vibear”2 la vida me preocupa.

—Se pierde el pensamiento crítico. Y eso está generando un déficit de pensamiento crítico…

—Que ya teníamos por culpa de las redes sociales.

—Sí, ya lo teníamos. Y si hablamos de los programadores jóvenes o principiantes que queremos contratar en una empresa, quieres que sepan lo que es una condición de carrera (race condition). No quieres que lo hayan “vibeado” y que la IA les diga: “Sí, una condición de carrera, la IA lo arreglará.” Porque, como ya hemos dicho, con las limitaciones de la IA en la programación de software, al menos en el futuro cercano, alguien tiene que saberlo.

  1. "Gatekeepy" in the original. Here, "to gatekeep" means putting up unnecessary barriers to entry. ↩︎
  2. "Vibear" is my rendering of "to vibe code": creating programs purely by prompting generative AIs, without writing a line of code. ↩︎

Byte, October '85

The cover of Byte, October 1985. The cover story is Simulating Society. It is illustrated by a sheet of printer paper wrapping around some human faces.

Here we go with our rereading of the latest in computing... from forty years ago, through the Byte magazine archives on archive.org. Today: October 1985.

To begin, don't complain that you aren't witnessing the great milestones of history. I give you... the high-density floppy disk! (I think most of you reading me are already of a certain age and will appreciate that jumping from 720 kilobytes to 1.44 megabytes, while not revolutionary, was quite a leap.)

Sony, Toshiba Prepare High-Density 3½-inch Disks

Sony announced in Tokyo that it has developed a 2-megabyte 3½-inch floppy disk, storing 1.6 megabytes (formatted) by doubling the number of sectors per track. The 2-megabyte medium uses a 1-micron magnetic layer (half the thickness of current 1-megabyte disks) and requires a higher coercivity (700 rather than 600-620 oersteds).

While the 2-megabyte versions use the same magnetic technology as earlier 3 ½-inch disks and drives, the magnetic heads of the drives require higher tolerances. An additional disk cartridge hole allows drives to distinguish between 1- and 2-megabyte disks.

Although it has already licensed 38 companies to produce 2-megabyte disks, Sony says it is waiting for formal standards to be set before marketing the disks and drives, which should be available to OEMs next year, probably at prices about 20 percent higher than 1-megabyte versions.

An even denser 3 ½-inch drive from Toshiba uses perpendicular recording technology to squeeze 4 megabytes of data onto a single-sided disk coated with barium ferrite. Toshiba plans to release evaluation units early next year, with full production slated for 1987

Raise your hand if you knew, or remembered, that before Access the database (which wouldn't arrive until 1992), Microsoft had a Microsoft Access for connecting to online information services over a modem (I had no idea, or had completely forgotten). The database Access is so dominant that I've barely managed to find any further information about it.

Microsoft Access advertisement. It shows a computer with the handset of a desk telephone lying on top of it, snapped in half. The headline is Don't get mad, get Access.

In our regular "you think this was invented yesterday, but no" department we have the books section, which covers Computer Culture: The Scientific, Intellectual, and Social Impact of the Computer, available, of course, on archive.org. It collects the papers from the conference of the same name, because Despacho 42 isn't the only place where people worry about these things, and naturally the conference was already worrying about the impact of AI…

Artificial Intelligence

Approximately one-fourth of Computer Culture (four papers and one panel discussion) deals specifically with artificial intelligence. The panel discussion on the impact of AI research is the most thought-provoking contribution in the book. As you might expect, this discussion is not so concise as an article dealing with the same topic, but the interaction among the panel members is intriguing. The panel consists of two philosophers (Hubert Dreyfus and John Searle) and three computer scientists (John McCarthy, Marvin Minsky, and Seymour Papert). Much of the discussion is spent identifying important questions about AI. Each panelist has a distinct viewpoint, resulting in a diversity of questions. Among these, however, two issues are of overriding concern: Can machines think? If they can, is machine thinking the same as human thinking?

The panelists seem to agree that computers can be used to study thinking, if for no other reason than to provide a contrast with human thought processes. On the other hand, the suggestion that appropriately programmed computers could duplicate human thought processes is much more controversial.

Aside from the philosophical issues, Papert makes a very important point when he argues that it is dangerous to reassure people that machines will never be able to challenge the intellectual capabilities of human beings. If people are lulled into a sense of security about machine capabilities, they will be ill prepared to deal with situations in which machines become better than people at doing specific jobs, he says. Whether or not the machines are described as thinking in these situations, the social and psychological issues raised by machine capabilities demand attention.
(I'm linking to the opening page of the books section rather than to the specific page of the excerpt above. In any case, the full review is worth reading… and so is the book, if you get the chance.)

More things that weren't invented yesterday. I don't watch much of the kick-a-ball kind of football, but I do watch a fair amount of American football, a sport whose broadcasts wouldn't be the same without the obligatory Skycam, a camera that flies over the field hanging from four cables. And yes, it's turning forty:

Skycam: An Aerial Robotic Camera System

A microcomputer provides the control to add three-dimensional mobility to TV and motion picture cameras

On a morning in March 1983, a group of technicians gathered at Haverford High School in a suburb of Philadelphia. Each brought an electrical, mechanical, or software component for a revolutionary new camera system named Skycam (see photo 1). Skycam is a suspended, mobile, remote-controlled system designed to bring three-dimensional mobility to motion picture and television camera operation. (See the text box on page 128.) I used an Osborne 1 to develop Skycam's control program in my basement, and it took me eight months of evenings and weekends. As of 3 a.m. that morning, however, the main control loop refused to run. But 19 hours later, Skycam lurched around the field for about 15 minutes before quitting for good. Sitting up in the darkness of the press booth, hunched over the tiny 5-inch screen, I could see that the Osborne 1 was not fast enough to fly the Skycam smoothly.

In San Diego 18 months later, another group of technicians opened 20 matched shipping cases and began to get the Skycam ready for an NFL preseason game between the San Diego Chargers and the San Francisco Forty-Niners. The Skycam was now being run by an MC68000-microprocessor-based Sage computer, and a host of other improvements had been made on the original. [Editor's note: The Sage computer is now known as the Stride; however, the machine used by the author was purchased before the company's name change. For the purposes of the article, the machine will be referred to as the Sage.] For the next three hours, Skycam moved high over the field, fascinating the fans in the stadium while giving the nationwide prime-time TV audience their first look at a new dimension in sports coverage.

Skycam represents an innovative use of microcomputers. The portable processing power needed to make Skycam fly was unavailable even five years ago. That power is the "invention" upon which the Skycam patents are based. It involves the support and free movement of an object in a large volume of space. The development team used the following experiment to test the movement and operation of the Skycam.

At a football field with one lighting tower at each of four corners, the team members bolted a pulley to the top of each pole, facing inward. Then they used four motorized winches, each with 500 feet of thin steel cable on a revolving drum and put one at the base of each tower.

Next, they ran a cable from each motor to the top of its tower and threaded the cable through the pulley. They pulled all four cables from the tops of the towers out to the middle of the field and attached the cables to a metal ring 2 feet in diameter weighing 10 pounds (see figure 1). A motor operator was stationed at each winch with a control box that enabled the operator to slowly reel in or let out the cable. Each motor operator reeled the cable until the ring was suspended a few feet from the ground, and then they were ready to demonstrate Skycam dynamics.

All four motor operators reeled in the cable. The ring moved upward quickly. If all four motors reel in at the same rate (and the layout of lighting towers is reasonably symmetrical) the ring will move straight up. In the experiment, the two motors on the left reeled in and the two on the right reeled out. The ring moved to the left and maintained its altitude. An instruction was given to the two motor operators on the left to reel out and the two on the right to reel in just a little bit. The ring moved right and descended as it moved back toward the center.

The theoretical basis of this demonstration is quite simple. For each point in the volume of space bounded by the field, the four towers and the plane of the pulleys, there is a unique set of four numbers that represents the distances between that point and each of the four pulley positions. Following the layout above for an arbitrary point on the field, you can...
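
The kinematics the article describes reduce to computing four Euclidean distances. Here is a minimal Python sketch of the idea; the field dimensions and pulley coordinates below are hypothetical, chosen only to illustrate the principle, not taken from the article:

```python
import math

# Hypothetical pulley positions: corners of a 100 m x 50 m field,
# with each pulley 30 m up its lighting tower.
PULLEYS = [(0.0, 0.0, 30.0), (100.0, 0.0, 30.0),
           (100.0, 50.0, 30.0), (0.0, 50.0, 30.0)]

def cable_lengths(point):
    """The four cable lengths that suspend the camera at `point`:
    simply the distance from the point to each pulley -- the
    'unique set of four numbers' the article describes."""
    return [math.dist(point, p) for p in PULLEYS]

# Flying the camera means moving between two sets of lengths.
start = cable_lengths((50.0, 25.0, 10.0))  # center of the field, 10 m up
end = cable_lengths((20.0, 25.0, 10.0))    # 30 m toward one end line
deltas = [e - s for s, e in zip(start, end)]
# The winches the camera moves away from must reel out (positive
# delta) while the ones it approaches reel in (negative delta).
```

By symmetry, the four starting lengths at the center of the field are identical, just as the ring in the demonstration hung level when all four operators reeled at the same rate.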

But this month my pick is the cover story: using computer simulations to model society:

Simulating Society

THE NEED FOR GREATER RIGOR in the social sciences has long been acknowledged. This month's theme examines computer-based simulation as a means to achieving that end. Simulation may be able to assist in evaluating hypotheses, not in the sense that an experiment in the physical sciences can test a hypothesis, but in the sense of making plain the ramifications of a hypothesis. The value of specifying a hypothesis with sufficient clarity to be amenable to programming and of examining the consequences of that hypothesis should not be underestimated. Indeed, one of the interesting aspects of the work presented here is that these researchers appear to be developing a tool for the social sciences that is not simply a poor stepchild of physical science methodologies.

Our first article, "Why Models Go Wrong" by Tom Houston, is a wonderfully readable account of the ways that you can misuse statistics.

Next, Wallace Larimore and Raman Mehra's "The Problem of Overfitting Data" discusses a difficult but important topic. Overfitting happens when your curve traces the noise as well as the information in your data. The result is that the predictive value of the curve actually deteriorates.
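
The failure mode Larimore and Mehra describe is easy to reproduce in a few lines. In this illustrative Python sketch (my own, not from the article), a two-parameter least-squares line and an interpolating polynomial are both fitted to noisy samples of a straight line; the polynomial matches the training data exactly, noise and all, which is precisely what hurts its predictions at new points:

```python
import random

random.seed(0)

def noisy_line(x):
    # True relationship y = 2x + 1, plus measurement noise.
    return 2 * x + 1 + random.gauss(0, 0.5)

xs = [float(i) for i in range(8)]
ys = [noisy_line(x) for x in xs]

# Model 1: least-squares straight line (two parameters).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Model 2: the degree-7 polynomial through all 8 points, in
# Lagrange form. It reproduces the training data *exactly*.
def interpolant(x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Compare both models at a held-out point.
x_new = 6.5
truth = 2 * x_new + 1
err_line = abs(slope * x_new + intercept - truth)
err_poly = abs(interpolant(x_new) - truth)
# The interpolant traced the noise as well as the signal, so its
# prediction is typically far worse than the simple line's.
```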

In "Testing Large-Scale Simulations," Otis Bryan and Michael Natrella show how validation (determining whether the specification for the simulation corresponds with reality) and verification (determining whether the simulation program corresponds with the specification) were achieved on a large-scale combat simulation they developed for the Air Force.

The ways of economic modeling are illustrated by Ross Miller and Alexander Kelso, who show how they analyzed the effects of proposed taxes for funding the EPA Superfund in "Analyzing Government Policies."

Michael Ward discusses his ongoing research in simulating the U.S.-Soviet arms race in "Simulating the Arms Race."

Several authors discuss new and surprising applications of simulation. In "EPIAID," Dr. Andrew Dean describes the development of computer-based aids for Centers for Disease Control field epidemiologists. Royer Cook explains how he fine-tuned a model in "Predicting Arson," and Bruce Dillenbeck, who uses an arson-prediction program in his work as a community activist, discusses modeling in "Fighting Fire with Technology."

Articles in other sections of the magazine that relate to this theme include Zaven Karian's review of GPSS/PC and Arthur Hansen's Programming Insight "Simulating the Normal Distribution."

When I began researching this theme, I took an excellent intensive course in simulation from Edward Russell of CACI. Dr. Russell's is the unseen hand guiding the development of this theme. Of course, any blame for bias in the choice of theme topics belongs to me, but much of the credit for the quality that is here must reside with him.

Don't miss the articles on the pitfalls, starting with the two that open the section, on the risks of bad modelling (a topic that, unfortunately, matters even more today than it did forty years ago), and continuing with the one on economic modelling with Lotus 1-2-3, or the one on epidemiology.

Oh, and since we're on the subject of modelling… did you know that SPSS/PC+ not only already existed in 1985, but that the original SPSS had been launched back in 1968? If anyone can think of a piece of software that's been on the market longer, do let me know.

Advertisement for SPSS/PC+. The slogan is Make Stat Magic. It shows a photo of a magician's top hat with a 5¼-inch floppy disk labelled SPSS/PC+ coming out of it.

And we're certainly not going to skip the Amiga. This time it's Bruce Webster, another of the magazine's star columnists, telling us how blown away he was by the system's power, price, and elegance:

According to Webster

Commodore's Coup

Product of the Month: Amiga

Last month, I made a few comments about the future of the home computer market, based on rumors I had heard about the Amiga from Commodore. In essence, I said that if what I had heard was true the Amiga might be the heir to the Apple II in the home/educational/small business marketplace.

Since writing that, I have seen the Amiga. I have watched demonstrations of its abilities; I have played with it myself; and I have gone through the technical manuals. My reaction: I want to lock myself in a room with one (or maybe two) and spend the next year or so discovering just what this machine is capable of. To put it another way: I was astonished. Hearing a description of a machine is one thing; seeing it in action is something else, especially where the Amiga is concerned.

I can tell you that the low-resolution mode is 320 by 200 pixels, with 32 colors available for each pixel (out of a selection of 4096). But that does not prepare you for just how stunning the colors are, especially when they are properly designed and combined. It also doesn't tell you that you can redefine that set of 32 colors as the raster-scanning beam moves down the screen, letting you easily have several hundred colors on the screen simultaneously.

It also doesn't tell you how blindingly fast the graphics hardware is. If you've seen some of Commodore's television commercials demonstrating the Amiga's capabilities, or if you've looked at the machine yourself, you have some idea as to what the machine can do. If you haven't, I'm not sure I can adequately describe it.

Having seen the graphics on the Amiga, I have to smile when I hear people lump it together with the Atari 520ST. The high-resolution mode on the ST is 640 by 400 pixels with 2 colors (out of 512); on the Amiga, it is 640 by 400 pixels with 16 colors (out of the 4096), and you can redefine those 16 colors as the raster-scanning beam goes down the screen. Also, the graphics hardware supporting all those colors is much faster. Little wonder, then, that a friend of mine, a game developer with several programs on the market, came back from the Amiga developers' seminar with plans to return the Atari ST development system at his house and to turn his attention to the Amiga instead.
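
Both color counts come from packing red, green, and blue into one word: 3 bits per channel on the ST gives 8³ = 512 combinations, while 4 bits per channel on the Amiga gives 16³ = 4096. A tiny Python sketch of the 12-bit case (the red-green-blue bit layout matches the usual description of the Amiga's color registers, but treat it as illustrative):

```python
def rgb12(r, g, b):
    """Pack three 0-15 channel values into a 12-bit color word
    (red in bits 11-8, green in bits 7-4, blue in bits 3-0)."""
    assert all(0 <= c <= 15 for c in (r, g, b))
    return (r << 8) | (g << 4) | b

assert 8 ** 3 == 512      # the ST's 9-bit palette space
assert 16 ** 3 == 4096    # the Amiga's 12-bit palette space
assert rgb12(15, 15, 15) == 0xFFF  # white
```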

As I guessed last month, the real strength of the Amiga is its totally open architecture. An 86-pin bus comes out of one side of the machine, giving any add-on hardware complete control of the machine. What's more, 512K bytes of the 68000's 16-megabyte address space have been set aside for expansion hardware, 4K bytes each for 128 devices. A carefully designed protocol tells hardware manufacturers what data they should store in ROM (read-only memory) so that the Amiga can automatically configure itself when booted. This is a far cry from the closed-box mentality of the Macintosh, which has forced many hardware vendors through weird contortions just to get their devices to talk consistently to the Mac without crashing.

The memory map is well thought out. The Amiga comes with 256K bytes of RAM (random-access read/write memory); an up...

Sniff.

If you read the whole thing, please don't be alarmed when you get to the bit where he mentions that RAM costs $350 (a bit over a thousand dollars, adjusted for inflation) for 256 kilobytes. In other words: for what 256 kilobytes cost back then, today you can buy about 320… gigabytes. A million to one. (And I suppose you won't be too surprised to learn that Apple's profit margins on RAM for its systems are not a 21st-century phenomenon.)
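
For the curious, the million-to-one figure is simple back-of-the-envelope arithmetic; here is the check in Python (the 320 GB is my own rough estimate of what the same money buys today, not a sourced figure):

```python
# $350 bought 256 KB in 1985; assume the same money buys ~320 GB now.
KB, GB = 2 ** 10, 2 ** 30

bytes_then = 256 * KB
bytes_now = 320 * GB

ratio = bytes_now / bytes_then
# 320 GB / 256 KB = 1,310,720 -- about 1.3 million to one.
```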

And we'll leave it here for this month. See you next month, with the November issue.

Jeff Minter and the history of video games

Sometimes at work we put together really cool things… and then I completely forget about them. For a while now Joan Arnedo, who is, among other things, director of the UOC's online master's degree in Video Game Design and Programming, has been organising RUMSXPLORA, a series of events devoted to preserving the memory of 1980s video games. For last year's edition Joan was after a first-class speaker, one rooted in the Commodore world. And, while we were kicking the subject around, the name of the legendary Jeff Minter came out of my mouth (if somebody releases a Llamasoft: The Jeff Minter Story and your career in the business starts with Centipede for the ZX81, runs through Gridrunner for the Commodore 64 and Attack of the Mutant Camels, continues with work for the Amiga and the Atari, then Atari's Jaguar, and reaches the present day, you are a legend).

And goodness knows how, but Joan talked Jeff into coming to Barcelona, and the result is this talk. It turned out, moreover, to be a fantastic first-person tour of the history and evolution of video games from the very early eighties to today and, as a bonus, it contains a reminder of why it matters to preserve the memory of those little programs from the days when memory was measured in kilobytes.

(I'm sure that at some point, relatively early in the 20th century, somebody proposed a film archive to preserve the earliest movies, and somebody else replied that there was no value in preserving anything so crude. If we'd had a bit more foresight, we'd probably have more and better records of the beginnings of cinema today. Ever the optimist, I hope we'll be more careful with the history of video games, but the industry is doing everything it can to burn its own history (see this and this).)

Anyway, don't miss Jeff Minter's talk, by the gods of the video game Olympus…

(Oh, and if you get bored, there's an episode of a certain podcast about the event that includes ten little minutes of somebody chatting with the legend ☺️.)