Byte, February '86

We continue our monthly project of leafing through Byte magazine… forty years late (you can find all the posts on the subject, and there are quite a few by now, under the Byte tag on this blog). And February '86 was devoted to… text processing (which, spoiler, is not the same thing as word processors).

Cover of the February 1986 issue of Byte. The theme is text processing. The cover illustration is a computer board with the word TEXT floating above it

And we start by looking at the ads. The first ad, I'd say, for a little program we're still using forty years later: Excel! Wikipedia says it launched in September '85, and if you go to our post on the May '85 issue (yes, we've been at this Byte business for a while now) you'll find the announcement that it was coming; correct me if I'm wrong (entirely possible), but I don't think we'd seen it around here since.

Two-page ad for Microsoft Excel. We see a one-button mouse and a three-and-a-half-inch diskette

And if the one-button mouse or the 3.5″ diskette caught your eye… yes, Microsoft originally released Excel only for the Mac.

I'm not including a screenshot, but the letters section (page 24 and following) is also worth a stop: readers review the program for computing π (from the May issue!) and explain just how slowly it converges (though they note it's very readable and a good learning example), and there are some corrections to the normal-distribution program (this time we only have to go back to October). Kudos to the attentive readers.

Onward, this time indulging our habit of stopping at anything Amiga-related. In this case, it's an introduction to the Kernel, the system software contained in its ROM, written by none other than its creator, the legendary (in small circles, admittedly) RJ Mical. If you want to read more on the subject, you can find the manual on the Archive as well. #TheyDontWriteSoftwareLikeThatAnymore

Introduction to the Amiga ROM Kernel

A look inside the Amiga by the creator of Intuition

Editor's note: The first version of this article appeared on BIX (BYTE Information Exchange) on October 10, 1985.

This article introduces the building blocks of the Amiga ROM (read-only memory) Kernel software. I will examine the ROM Kernel including AmigaDOS and the disk-based libraries and devices, and present examples of translating code from other machines to the Amiga. Finally, I'll look at the hardware and special features of the ROM Kernel, describing how to use these directly in a system-integrated fashion. (Editor's note: For an overview of the Amiga from Commodore, see "The Amiga Personal Computer" by Gregg Williams, Jon Edwards, and Phillip Robinson, August 1985 BYTE, page 83.)

System Overview

It is rare for software and hardware groups to work as closely together as we did at Amiga. We exchanged and debated ideas continuously during the creation of the Amiga. The close relationship influenced the design, bringing new features to the hardware and allowing the software to take full advantage of the hardware.

The Amiga's greatest strengths lie in its modularity and the interconnections among its system components, both hardware and software. The design teams designed and developed simultaneously, and from the start the components were intended to complement one another. Even though we designed the hardware pieces to fit tightly together, you can use any subset of the features without the necessity of controlling the entire machine. It's the same with the ROM software, where the pieces work closely together but each can stand alone.

The hardware and software combine efforts in many ways to achieve the Amiga's performance. For instance, the hardware includes a special coprocessor, the Copper, which synchronizes itself to the display position of the video beam without tying up the bus or the processor. The Copper can move data to one of the many hardware registers or it can cause a 68000 interrupt, which the Amiga's multitasking Exec (also known as Executive) then processes. This makes the Copper a powerful, unobtrusive auxiliary tool. It is used by the Graphics Support library for display-oriented changes and by the audio device for time-critical audio channel manipulations. You can use the Copper for time-critical operations because it's tied to the display, which is guaranteed to run at 60 Hz (the display processors start from the top of the screen 60 times a second).

The way the Amiga handles communications with its peripherals is another example of the union of hardware and software. The signals that pass between the Amiga and its peripherals are interrupt-driven. Peripherals, therefore, do not disturb the system or require monitoring until information needs to be communicated. The Amiga Exec works with the interrupt-driven communication by managing a complete interrupt-processing mechanism, providing a convenient, interleaved, prioritized processing of interrupts.

The multitasking Exec forms the core of the system software; it is a compact collection of routines that underlies the rest of the Amiga ROM software. The developers attempted to optimize the Exec for space, performance, clarity of usage, and the creation and management of lists, which are the primary components of Exec. All of the other pieces of the Exec are built on lists and, therefore, provide performance with a minimum of system overhead. You will be able to use even the more esoteric Exec functions once you learn the concept of the Exec list.
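With everything in Exec built on lists, a rough feel for the idea can be had from a few lines of Python. The class and names below are illustrative only, not the real Exec structures (which are doubly linked C structs with signed 8-bit priorities):

```python
class Node:
    """A named list node with a priority, loosely modeled on Exec's
    struct Node (the real thing is a doubly linked C struct)."""
    def __init__(self, name, pri=0):
        self.name = name
        self.pri = pri

def enqueue(nodes, node):
    """Insert a node keeping the list sorted by descending priority,
    the way Exec's Enqueue() maintains its task and interrupt lists."""
    for i, n in enumerate(nodes):
        if node.pri > n.pri:
            nodes.insert(i, node)
            return
    nodes.append(node)

ready = []
enqueue(ready, Node("input.device", pri=20))
enqueue(ready, Node("console.device", pri=0))
enqueue(ready, Node("user task", pri=5))
# ready order: input.device (20), user task (5), console.device (0)
```

Once insertion keeps the list priority-sorted, dispatching a task or an interrupt handler is just "take the head of the list", which is why Mical can claim performance with minimal system overhead.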

Exec is the starting point for all the other pieces of ROM software, mostly because it is the controller of tasks and interrupts. Each of the ROM Kernel software components is designed to stand alone as much as possible; programmers can choose which components to use. But at the...

A few pages later we find an Amiga ad that is a (well-deserved) tribute to Denise, Paula, and Agnus, the three chips specialized in video, audio, and memory management, revolutionary for their time, which were one of the vital parts in making the Amiga the multimedia marvel it was.

Commodore Amiga ad. Three chips are shown, boasting 4096 colors, four-channel stereo sound, 32 instruments, 8 sprites, 3D animation, 25 DMA channels, a bit blitter, and male and female voices

And we leave the Amiga (until we're given the slightest excuse to bring it back 😅) and get into the issue's theme, text processing. First, a conversation with that legend of computing, Donald Knuth (pronounced "Ka-NOOTH", by the way), today professor emeritus at Stanford, creator of TeX and author of the magnum opus The Art of Computer Programming (still in progress). By then more than a decade had passed since he received the Turing Award, and in the interview, as the issue's theme demands, they talk about digital typography and the creation of Metafont, software that is still used today and remains a [not so] little marvel.

COMPUTER SCIENCE CONSIDERATIONS

CONDUCTED BY G. MICHAEL VOSE AND GREGG WILLIAMS

Donald Knuth speaks on his involvement with digital typography

Text processing as a computer science problem has consumed a major portion of the time and energy of Stanford professor Donald Knuth over the past eight years. Knuth authored and placed into the public domain a highly regarded typography system that he calls TeX (pronounced "tech"), along with a font creation language called METAFONT. In conjunction with the completion of TeX, Knuth and Addison-Wesley are publishing a five-volume work entitled Computers and Typesetting. Volume 1 is The TeXbook, volume 2 is the source code for TeX, volume 3 is The METAFONT Book, volume 4 is the METAFONT source code, and volume 5 is Computer Modern Typefaces.

To discover what so intrigued Knuth about this subject, BYTE senior editors Gregg Williams and Mike Vose conducted the following interview with Professor Knuth at Addison-Wesley's offices in Reading, Massachusetts, on November 11, 1985.

BYTE: Dr. Knuth, how did you become involved with digital typography and the public-domain system known as TeX?

Knuth: I got interested because I had written books and seen galley proofs, and suddenly computers were getting into the field of typesetting and the quality was going down.

Then I was working on a committee at Stanford planning an exam, and we got a hold of some drafts of Patrick Winston's book on artificial intelligence. We were looking at it to see if we should put it on the reading list for a comprehensive exam. It had just been brought in from Los Angeles where it had been done on a digital phototypesetter. This was the first time that I had ever seen digital type at high resolution. We had a cheap digital machine at Stanford that we thought of as a new toy. But never would I have associated it with printing a book that I'd be proud to own. Then I saw this type, and it looked as good as any I had ever seen done with metal. I knew that it was done just with zeroes and ones. I knew that it was bits. I could never, in my mind, ever, conceive of doing anything with lenses or with lead, metallurgy, and things like that. But zeroes and ones was different. I felt that I understood zeroes and ones as well as anybody! All it involved was getting the right zeroes and ones in place and I would have a machine that would do the books and solve all the quality problems. And, also, I could do it once and for all. I still had a few more volumes to write [of his seminal work, The Art of Computer Programming, a seven-volume series of which three volumes are finished] and...

And, to drive home the point that text processing doesn't mean word processors (at least, not the ones that spring to mind first), we can take a dip into the state of the art, at the time, of natural-language interpretation:

INTERPRETATION OF NATURAL LANGUAGE

by Jordan Pollack and David L Waltz

A potential application of parallelism

This article was adapted from "Parallel Interpretation of Natural Language," presented to the International Conference on Fifth Generation Computer Systems, November 1984.

THE INTERPRETATION of natural language requires the cooperative application of both language-specific knowledge about word use, word order, and phrase structure, and real-world knowledge about typical situations, events, roles, contexts, and so on. While these areas of knowledge seem distinct, it isn't easy to write a program for natural-language processing that decomposes language into its parts; i.e., you cannot construct a psychologically realistic natural-language processor by merely conjoining various knowledge-specific processing modules serially or hierarchically.

We offer instead a model based on the integration of independent syntactic, semantic, and contextual knowledge sources via spreading activation and lateral inhibition links. Figure 1 shows part of the network that is activated with the sentence

John shot some bucks. (1)

Links with arrows are activating, while those with circles are inhibiting. Mutual inhibition links between two nodes allow only one of the nodes to remain active for any duration. (However, both nodes may be simultaneously inactive.) Mutual inhibition links are generally placed between nodes that represent mutually incompatible interpretations, while mutual activation links join compatible ones. If the context in which this sentence occurs has included a reference to "gambling," only the shaded nodes of figure 1a remain active after relaxation of the network. But if "hunting" has been primed, only the shaded nodes shown in figure 1b will remain active. Notice that the "decision" made by the system integrates syntactic, semantic, and contextual knowledge: The fact that "some bucks" is a legal noun phrase is a factor in killing the readings of "bucks" as a verb; the fact that "hunting" is associated with both the "fire" meaning of "shot" and the "deer" meaning of "bucks" leads to the activation of the coalition of nodes shown in figure 1b; and so on. At the same time, the knowledge base in our model is easy to add to or modify. In this model of processing, decisions are spread out over time, allowing various knowledge sources to be brought to bear on the elements of the interpretation process. This is a radical departure from cognitive models based on the convenient decision procedures provided by conventional programming languages.

Our program operates by dynamically constructing a graph with weighted nodes and links from a sentence while running an iterative operation that recomputes each node's activation level (or weight) based on a function of its current value and the inner product of its links...
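The relaxation loop the excerpt describes is easy to toy with. The sketch below invents its own nodes, weights, and update rule for the "John shot some bucks" example; the paper's actual network and activation function are richer, so treat everything here as an assumption-laden illustration (arrows in the figure become positive weights, circles become negative ones):

```python
# Toy spreading-activation / lateral-inhibition network for
# "John shot some bucks". Nodes, weights, and the update rule are
# invented for illustration, not taken from the paper.
nodes = {"deer": 0.1, "dollar": 0.1, "hunting": 1.0, "gambling": 0.0}
links = [
    ("hunting", "deer", 0.5),     # activating: hunting primes "deer"
    ("gambling", "dollar", 0.5),  # activating: gambling primes "dollar"
    ("deer", "dollar", -0.6),     # mutual inhibition: at most one
    ("dollar", "deer", -0.6),     # reading of "bucks" stays active
]

def relax(nodes, links, clamped=frozenset(), steps=50, decay=0.1):
    """Iteratively recompute each activation from its current value
    plus the inner product of its incoming links, clamped to [0, 1]."""
    for _ in range(steps):
        new = {}
        for n, a in nodes.items():
            if n in clamped:          # context nodes stay as primed
                new[n] = a
                continue
            net = sum(nodes[src] * w for src, dst, w in links if dst == n)
            new[n] = min(1.0, max(0.0, (1 - decay) * a + net))
        nodes = new
    return nodes

final = relax(nodes, links, clamped={"hunting", "gambling"})
# With "hunting" primed, the "deer" reading of "bucks" survives
# relaxation while the "dollar" reading is inhibited to zero.
```

Priming "gambling" instead of "hunting" flips the outcome, which is exactly the context effect figures 1a and 1b illustrate.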

(As is the house custom, both Pollack and Waltz are not just experts but pioneers in the field.)

We continue with the theme. We complain (rightly) that arts and humanities are kept excessively separate in many people's heads, and that this is the source of quite a few of our problems. It was largely already like that in the eighties, let's not kid ourselves, but every now and then you could find things like an article in a technology magazine devoted to the processing of… poetry.

POETRY PROCESSING

by Michael Newman

The concept of artistic freedom takes on new meaning when text processing handles the mundane tasks of prosody

For over a year, Michael Newman, Hillel Chiel (a researcher at Columbia Medical School), and Paul Holzer (a programmer and analyst for PaineWebber) have been developing The Poetry Processor: Orpheus A-B-G. The software is not yet commercially available, but we are pleased to share Michael Newman's thoughts on poetry processing and a module of Paul Holzer's code that shows off some of the new application's capabilities.

THE PROPERTIES OF a medium can have a decisive impact on the nature of what the medium conveys. Poetry began in an oral bardic tradition. It was newsy, folksy, evocative of the doings of great heroes. It had to be accessible to folk encountered at a roadside as well as pleasurable to more educated people met at court. There was no great emphasis on intricate forms, on how the poem looked on a page, because the page was not where the poem resided. The poem was voice-resident, ear-active. When Gutenberg invented movable type he did more than spring the Bible. His invention ultimately provided a watershed, an opportunity for the consolidation of language itself — and Shakespeare jumped on the opportunity. He reconfigured poetry, bringing together history, tragedy, and comedy under its roof. And, by casting poetry as theatre, he popularized it immensely.

Poetry in print became more permanent, less permutable; more visual, less aural. In this century, with the development of free verse, the poem has become almost a visual object, broken up and spread all over the page. There is even concrete poetry, which makes a fetish of typography.

Another world that makes a fetish of typography is software, specifically the largest part of software: word processing. Software is about as permanent as print because you can always get a printout, but it is much more permutable. And, above all, it is interactive.

So what will be the impact of this revolutionary new medium on the oldest, most interactive, programmatic, musical, and image-provoking form of human speech? And what will be the impact of poetry on software?

Classical poetic forms—such as the sonnet, the villanelle, the sestina—are natural-language programs, algorithms. The sonnet is a set of instructions specifying 14 lines of iambic pentameter; a line of iambic pentameter contains five iambic units (feet). An iamb is a two-syllable unit with the accent on the second syllable.

Poetic algorithms have more in common with programming than their algorithmicness and use of powerful syntax. Poems involve iteration: Not only do iambs repeat and five-beat lines repeat, but ending sounds repeat (rhyme in a sonnet), whole lines repeat (refrains and rhymes in a villanelle), words repeat (ending words in a sestina). Individual letters repeat in alliteration. This repetition is something poets count, and something poetry readers see and hear. If poets can count these things, so can a computer. If readers see and hear these things, so can the computer user—in an enhanced way.

Poems also involve two other cornerstones of computer science: recursion and conditionality. Every sonnet written refers to others of its kind. It...

Please don't miss the discussion of how to extract the meter of a poem automatically (in English, no less, where the whole thing depends more on stressed and unstressed syllables than it does in Spanish):

Machine Reading of Metric Verse
by Paul Holzer

A computer can definitively scan a line of poetry for its stress pattern principally in one of two ways: (1) an algorithm can deduce the syllabic structure and the stressed syllables from analysis of the letters that make up the word, or (2) the computer can look up every word in a dictionary database that holds the syllabification and accentuation of every word. The lookup method requires a large database, and the algorithmic approach is complex and requires a deep analysis of English phonetics and spelling.

One of the features of a poetry processor is that the poet-user can specify the meter of every line of a poem (see photo A). For example, the string .-/.-/.-/.-/.-/ represents iambic pentameter. Dots (.) indicate an unstressed syllable and dashes (-) represent a stressed one. The slash (/) indicates the end of a foot, the basic metric unit. The first line of Shakespeare's Sonnet 18

shall I comPARE thee TO a SUMmer's DAY?

is an example of a line of iambic pentameter. The stressed syllables are in uppercase.

After writing a poem, users might request a metric scan of the poem. I will describe here a method for doing this that is not based on one of the two general solutions I mentioned in the first paragraph. Instead, the processor will break each word into its syllables and then redisplay each line, with each syllable in uppercase or lowercase according to the position of the dots and dashes in a user-specified metric form. So, were Shakespeare trying to compose trochaic pentameter, with the metric pattern -./-./-./-./-./, the processor would reply with

SHALL i COMpare THEE to A sumMER'S day?

He would read this to himself, trying to put the stress on the uppercase syllables. Noting the rhythmic clumsiness, he might rewrite his line as follows:

To a summer's day I shall compare thee

and the processor would respond:

TO a SUMmer's DAY i SHALL comPARE thee.

Sounds better!
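The round trip just shown (syllables in, re-cased line out) is easy to sketch. The `scan` function below and the hand-made syllable divisions are mine for illustration, not SCANPOEM's actual code, which derives the syllables itself:

```python
def scan(words, pattern):
    """Redisplay a syllabified line against a metric pattern.
    words: each word given as a list of its syllables.
    pattern: '.' = unstressed, '-' = stressed, '/' (end of foot) ignored.
    The pattern must supply one mark per syllable."""
    marks = [c == "-" for c in pattern if c in ".-"]
    out, i = [], 0
    for word in words:
        cased = []
        for syl in word:
            cased.append(syl.upper() if marks[i] else syl.lower())
            i += 1
        out.append("".join(cased))
    return " ".join(out)

# First line of Sonnet 18, syllabified by hand:
line = [["shall"], ["i"], ["com", "pare"], ["thee"], ["to"],
        ["a"], ["sum", "mer's"], ["day?"]]
print(scan(line, ".-/.-/.-/.-/.-/"))  # shall I comPARE thee TO a SUMmer's DAY?
print(scan(line, "-./-./-./-./-./"))  # SHALL i COMpare THEE to A sumMER'S day?
```

The iambic pattern reproduces the article's first reading, and the trochaic pattern its deliberately clumsy second one.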

The main task for the computer is to break each word into its syllables. The algorithm is based on a systematic application of what appear to be the general rules by which English words break into syllables. Of course, there are no fixed rules, as evidenced by the fact that different dictionaries give different syllabifications for the same word.

The following is a simple version of the algorithm:

1. Break the word up into a sequence of alternating vowel and consonant groupings. Thus microcomputer becomes m i cr o c o mp u t e r. Wherever there is a vowel or group of contiguous vowels, there will be a syllable. We need only assign the neighboring consonants to the syllable on the right or to the syllable on the left.

2. If the first vowel group has a consonant group to its left, then assimilate this consonant group to the vowel group. This leads, in our example, to mi cr o c o mp u t e r.

3. If the final vowel group has a consonant group to its right, then assimilate this consonant group to the vowel group. We now get mi cr o c o mp u t er.

4. For the remaining unassigned consonants, do the following:

a. If the consonant stands alone, attach it to the following vowel. Thus we get mi cr o co mp u ter.

b. If there are two consonants, split them. We get mic ro com pu ter.

c. If there are three consonants, then:

i. If there is a doubled consonant, split the pair; thus apply becomes a ppl y and finally ap ply.

ii. If there is no doubled consonant, but the first of the three consonants is n, r, or l, then split between the second and third consonants.

iii. In all other cases, split between the first and second consonants.
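The numbered steps above can be sketched in a few lines. This is my reading of the article's rules, not Holzer's Pascal: it assumes 'y' counts as a vowel (so that apply splits as described) and handles only the simple cases listed, with no preprocessing:

```python
import re

def syllabify(word):
    """Sketch of the article's simple algorithm (steps 1-4 only).
    'y' is treated as a vowel; preprocessing rules are omitted."""
    # Step 1: alternating vowel / consonant groups.
    groups = re.findall(r"[aeiouy]+|[^aeiouy]+", word.lower())
    is_vowel = [g[0] in "aeiouy" for g in groups]
    idx = [i for i, v in enumerate(is_vowel) if v]  # vowel-group positions
    left = dict.fromkeys(idx, "")                   # consonants joining left
    right = dict.fromkeys(idx, "")                  # consonants joining right
    if not is_vowel[0]:                 # step 2: leading consonants
        left[idx[0]] = groups[0]
    if not is_vowel[-1]:                # step 3: trailing consonants
        right[idx[-1]] = groups[-1]
    for a, b in zip(idx, idx[1:]):      # step 4: interior consonant groups
        cons = groups[a + 1]
        if len(cons) == 1:              # 4a: lone consonant goes right
            left[b] = cons
        elif len(cons) == 2:            # 4b: split the pair
            right[a], left[b] = cons[0], cons[1]
        elif cons[0] == cons[1]:        # 4c.i: doubled consonant, split it
            right[a], left[b] = cons[0], cons[1:]
        elif cons[0] in "nrl":          # 4c.ii: split after the second
            right[a], left[b] = cons[:2], cons[2:]
        else:                           # 4c.iii: split after the first
            right[a], left[b] = cons[0], cons[1:]
    return [left[i] + groups[i] + right[i] for i in idx]

print(syllabify("microcomputer"))  # ['mic', 'ro', 'com', 'pu', 'ter']
print(syllabify("apply"))          # ['ap', 'ply']
```

Running it on the article's two worked examples reproduces mic ro com pu ter and ap ply.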

Before applying this algorithm, however, we must preprocess the initial string of letters in order to take into account certain peculiarities of English orthography:

1. Final e is silent (with certain exceptions); treat it as a special consonant. Thus compute becomes compu te, and the algorithm then yields com pute.

2. Translate many two-letter sequences into special single consonants, e.g., sh, th, gu, qu, and ck.

3. Identify common suffixes. For example, the algorithm applied to blameless would yield bla me less. However, when less is removed as a suffix, the e in blame would be recognized as silent, yielding blame less.

4. Identify some prefixes. For example, if en is recognized as a prefix, then enact becomes en act, rather than e nact.

It seems to be impossible to come up with a reasonably small set of rules and preprocessing steps to guarantee correct syllabification of all words. Two examples will illustrate some of the inherent difficulties:

1. Compound words: The algorithm will not detect the silent e in snake within the compound word snakebite unless the fragment bite is recognized as a word or treated as a suffix. Avoiding the problem would require extensive word or suffix table lookups.

2. Successive vowels in different syllables: In reach, the ea is a single vowel sound, and the algorithm would treat it correctly. In react, we pronounce the e and a separately, and the correct syllabification is re act. Were the algorithm modified to isolate re as a prefix, it would treat react correctly, but turn reach into re ach.

Where ambiguities can arise, the best approach is to formulate a rule that leads to the smallest number of cases requiring table lookups for resolution. The present algorithm is not perfect, but it produces a readable, if not dictionary-perfect, syllabified word 95 percent of the time.

I have provided a Pascal program that implements the syllabification algorithm and illustrates how The Poetry Processor "reads" a user's poem according to a user-specified metric scheme. [Editor's note: The Microsoft Pascal source code and executable version are available from BYTEnet Listings, telephone (617) 861-9764, as SCANPOEM.PAS and SCANPOEM.EXE. The executable version requires any MS-DOS or PC-DOS machine.] To run the program, prepare two files. TEST.POE must contain the lines of poetry. You can write TEST.POE as a text file with each line of the poem on a separate line. A second text file, TEST.FRM, should have a line containing a string of dots (.) and dashes (-) indicating the accentual scheme that each line of poetry is supposed to follow. Slashes indicating the end of a foot are optional.

As an example, a Shakespearean sonnet (iambic pentameter) will have a TEST.FRM file consisting of 14 lines of .-/.-/.-/.-/.-/. Each line in TEST.FRM must end with an asterisk. After editing the TEST.FRM and TEST.POE files, you can run the program by entering its name, SCANPOEM. The computer will "read" the poem, printing in uppercase the appropriately stressed syllables.

Note that the program is a prototype version of the algorithm. It will not handle text with capital letters, apostrophes, or punctuation, so be careful not to include these features in TEST.POE. When using this demonstration program, you will undoubtedly find that some words are not properly syllabified.

But the peak of geekery, seriously, is an entire article devoted to the weighty (I'm only half joking here) question of whether it's worth learning to type on a Dvorak keyboard (#TLDR, the authors think it is, if you can afford the luxury of always typing on a Dvorak keyboard). That the piece's first author is a professor emeritus… of physics, devoted to forensic astronomy, is just the icing on the cake.

Did I mention we'd come back to the Amiga at the first opportunity? Yes, right? Here, the British origins of AmigaDOS:

Tripos—The Roots of AmigaDOS

Metacomco is the British company behind AmigaDOS

by Dick Pountain

A question that must be puzzling many people in U.S. computer circles is, "What is Metacomco?" When Commodore announced its spectacular Amiga computer, much of the U.S. press failed to point out (and possibly did not know) that the advanced operating system AmigaDOS was in fact written by a small British software house called Metacomco. (For more information on the Amiga, see "The Amiga Personal Computer" by Gregg Williams, Jon Edwards, and Phillip Robinson, August 1985 BYTE, page 83.)

Metacomco is based in Bristol, England, a city that is beginning to rival Cambridge as our potential computing capital (it also houses TDI-Pinnacle, INMOS, and others). Metacomco was founded in 1981 by Derek Budge and Bill Meakin and now employs about 25 people, mainly programmers and other technical staff.

The company's first product was a portable BASIC interpreter written in BCPL, the forerunner of C, which is taught and used extensively at Cambridge University. This interpreter was ported to the 8086 processor and shortly afterward was sold to Digital Research Inc., which still markets its descendant as Personal BASIC. This U.S. link became very important to Metacomco, for the royalties provided a steady source of income during the crucial early years and helped the company establish an office in California, which kept Metacomco in touch with the U.S. computer scene.

In 1983 Dr. Tim King, a Cambridge computer scientist, was engaged by the company as a consultant, and Metacomco's emphasis switched to the 68000 processor, with which King had been working since the first samples came out in 1981. The company produced a series of development tools, also written in BCPL, including a full-screen editor, a macro assembler, and a linking loader. At that time there was no clearly established standard operating system for the 68000, so the next step was to write one. Subsequently, Tripos was born.

The Tripos operating system was based on a multitasking kernel developed as a doctoral thesis project at Cambridge in 1976. ("Tripos" was the name given to the three-legged stools that students sat on in the old days when taking their examinations and has since become the colloquial name for the Cambridge final examinations.) King, then working at Bath University, took the kernel written for a DEC PDP-11 and made it into a full 32-bit multitasking operating system for the Sage microcomputer (which was new at that time). Tripos is BCPL-based in the same way that UNIX is C-based, and it has many innovative features that I will discuss.

Metacomco had also purchased the rights to Cambridge LISP, a powerful LISP interpreter/compiler originally developed for the IBM 370 and then ported to the 68000 at Cambridge. Metacomco produced versions for the ill-fated CP/M 68K and then for Tripos. Reduce 3, a symbolic math system written in LISP, was added to produce a Sage-based workstation that was sold to research institutions in various countries. Customers included SORD in Japan and Bristol neighbor INMOS, which used BCPL for the first stage of bootstrapping its Occam compiler onto the 68000, using Sage computers running Tripos.

In 1984, Tim King joined Metacomco full-time as Research Director, and Sinclair Research launched the QL. Initially the QL lacked a serious software-development environment, and Metacomco was able to quickly port its development tools, including the BCPL compiler, to it. The company has since extended the range to include an ISO (International Organization for Standardization)-validated Pascal compiler, and it markets these products directly, rather than via the manufacturer, largely by mail order.

November 1984 is the crucial date in the AmigaDOS story. Metacomco visited Amiga...

And one more page of Amiga content still, though here it's not the content I want to highlight but the container. It's 1986, and the world is beginning to connect digitally. Byte, in fact, has its own online service, BIX (the Byte Information Exchange), which had launched in June (at six dollars an hour, in the money of the day)… but the audience was so small (Wikipedia says they reached 17,000 users in '87) that the magazine hyped the service by featuring a "Best of BIX" section in its pages. Maybe we have changed a bit after all, in these forty years…

Best of BIX

AMIGA

Commodore's introduction of the Amiga has produced a flurry of activity among professional developers and personal computer users within the Amiga conference. The summary this month includes discussion on cables, monitors, printers, and software fixes. One of the hottest topics in the Amiga conference is on the subject of improving the performance of the Amiga by removing the 68000 and replacing it with a 68010 or 68020.

68010/68020 Upgrade

amiga/amiga68000 #22

An Amiga conference member asked if he could just drop a 68010 into the 68000 socket. This would give a 10 to 80 percent boost in performance! He had one, just sitting up to its bottom in black foam, on the shelf. But there were all these warnings about what would happen to his warranty if he opened the case.

amiga/amiga68000 #26, from rickross [Richard Ross, Eidetic Imaging]

M68010 works! A 68010 plugs directly into the Amiga and no problems were detected in the operation of the system software. Also, for everyone like me who has been trying to judge from the BYTE review photos, the microprocessor is socketed. The performance increase gained by the switch is not phenomenal, and no benchmarks are available, but it did run perceptibly faster. The M68020 has also been tried and seems to work as well.

amiga/amiga68000 #32

A BIX user provides the following:

The company that markets the 68020 piggyback board is Computer System Associates Inc., 7564 Trade St., San Diego, CA 92121, (619) 566-3911. The prices are:

Board only                   $575
Board plus 68020             $975
Board plus 68020 and 68881   $1480

For more information, contact Patricia Chouinard at the address above. I believe that 68000/68010 supervisor code that handles exceptions and certain other privileged functions will have to be modified. User code should work as is.

amiga/tech.talk #39

An Amiga owner describes his adventure in opening his computer and replacing the CPU:

You just got your Amiga and it's already the slow boy on the block, right? You can plug a 68010 into an Amiga (there goes my warranty) and it does go faster. My Sieve benchmark is down to 5.8 seconds from 6.1.

Note: Your warranty will most likely be dead after you do this. Also, there is a lot of RFI shielding inside the Amiga. You get to undo a lot of screws, bend a couple of tabs, and pray a lot. If you aren't a tech type, don't even think about doing this yourself. The 68000 is socketed, but it is partially under the micro-disk drive, so you have to lift it from one end and kind of levitate out the other end (use of your CHI helps). Also, you only take out the screws in the deep wells on the bottom (five in all). Then there are four places where the top grabs the base at the four corners (there were already marks on mine from where it was put together, I guess). Once you have the top off there is a big surprise waiting for you... Another big surprise is that big RFI shield. Yes, it is a $#%+& to get off! There are screws on three sides and two tabs of metal to untwist. Once the shielding is out of the way, your first sight is of the WCS [writable control store] daughterboard. The custom chips and two parallel I/O chips are made with MOS technology.

The CPU is made by Motorola. The main board looks pretty much like the BYTE review photos. The boot ROMs are 27256s! This gives a 32K-byte by 16-bit boot ROM! What are you guys hiding in there? I could put a BASIC interpreter in that much space!

If you attempt to change your CPU, don't blame me if you muff it! If you don't know about how to make yourself static-free, you could really buy yourself some trouble of the worst kind.

Compatibility: I've run all of the Workbench demos. Everything seems fine, but I'm not making any promises. . .

amiga/tech.talk #41

The adventurous Amiga owner says that yes, his Amiga boots up, squeaks and everything! All the software he has runs and works great. The only potential problem at this point is how many times the MOVE SR,dest op code is used. This is the only active op-code difference. There is a whole host of new goodies, though, some that make a desire for an MC68881 easier to satisfy.

amiga/tech.talk #43: a comment to 39

Another BIX subscriber replied that the upgrade produced only a 5 percent increase in throughput. Perhaps fortunate, because the descriptions of the hardware here have indicated that bus bandwidth consumption by the 68000 is low enough to allow other custom DMA chips to steal enough cycles to get their work done. It would appear that inserting a 68020 in the socket would require faster bimmers, etc.

amiga/tech.talk #44: a comment to 43

Wouldn't think just putting in a 68020 would affect DMA. Same clock speed. Or does the '20 do something different cycle-wise?

amiga/tech.talk #45: a comment to 44

The author of message 43 replied that the 68020 at the same clock speed will finish an instruction or series of instructions internal to the CPU in less time and start requesting the bus for some ROM or RAM access. He assumed that the DMA chips hold a higher bus priority, so the result will be that the 68020 will often be sitting there in idle awaiting the BUSACK signal. Waste of a 68020. Perhaps that explains why there is only a 5 percent 68010 edge over the 68000.

amiga/tech.talk #46: a comment to 45

Somebody said that the 68000 only uses every other clock cycle (for memory access, that is). The DMA hardware is fast enough to do four accesses during every clock cycle. Most of the DMA accesses the bus during periods when the 68000 doesn't. If the 68020 doesn't have these quiet periods then there could be problems.
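The interleaving argument is easy to picture with a toy model (my sketch, not from the BIX thread, with hypothetical cycle numbering): if DMA is confined to the bus slots a 68000 never uses, there is no contention, while a faster CPU that may request the bus on any cycle collides constantly.

```python
def conflicts(cpu_slots, dma_slots):
    # count bus cycles that both the CPU and the DMA hardware want
    return len(set(cpu_slots) & set(dma_slots))

cycles = range(16)
dma    = [c for c in cycles if c % 2 == 1]  # DMA steals the odd slots
m68000 = [c for c in cycles if c % 2 == 0]  # 68000 accesses memory every other cycle
m68020 = list(cycles)                       # a cache-fed '020 may want any slot

print(conflicts(m68000, dma))  # 0: the quiet periods absorb all the DMA
print(conflicts(m68020, dma))  # 8: every DMA slot is now contested
```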

amiga/tech.talk #47: a comment to 46

Actually, there is a counterargument to that, which is that the 68020, but not the 68010, has an instruction-only cache, which would mean...

Before closing this section, I want to take the opportunity to note Robert Tinney's obituary in Ars Technica. Who is Robert Tinney? The illustrator behind many of the covers of the Byte issues we've featured here, who passed away this February 1st. That his obituary appeared in Ars says a lot both about the magazine's relevance and about the visual impact Tinney's work had on a great many people. Curiously, we're very close to reaching the issues in which the magazine stopped commissioning Tinney and switched to photo covers, as you can check in the Byte magazine archives on archive.org, which you can also use, if you like, to skip ahead and see what "next month's" issue is about. I'll add that Tinney had a store, still active (and I hope it stays that way for a long time), and that right now I'm fighting very hard with myself not to buy posters of the 1982 digital arts issue, the April 85 one, or the "keys to education" one from, no less, July 1980. #YaNoSeEscribeSoftwareAsí


And we also continue our review of the March 86 episodes of Computer Chronicles.

The first episode is devoted to trading stocks by computer, quite a novelty at the time. I didn't find it especially interesting, beyond the gadgets for receiving financial data over FM radio, both as a standalone device and as an add-on for your PC.

The second program of the month covers "psychological software", from software to assist with certain therapies (given the sophistication of the era, closer to the little program you play with to renew your driver's license) to tests of various kinds, with their, inevitably, "artificial intelligence modules"… and the same concerns and the same dodges that sound so familiar today.

(And in the news briefs, word of the Commodore crisis: the company owed the banks two hundred million dollars. It wouldn't actually die until 94, but things were already starting to smell of smoke.)

The third program of the month was devoted to astronomy software, both professional and amateur (the latter quite recognizable to anyone who has used an astronomy app, only four orders of magnitude less powerful and with Jurassic interfaces). The discussion of "professional" astronomy… the usual: people marveling at how far technology had advanced in the field… technology that now looks almost like a toy to us.

(And in the briefs, the death of the legendary Osborne… fifty-three million dollars in losses at Commodore, as if the two hundred million of debt weren't enough… and Steve Jobs's purchase of Pixar for "several million dollars".)

Episode 3×22, devoted to color, sadly seems to be lost. As usual, you can snoop on what's coming in March both in the Wikipedia episode list and in the playlist that the YouTube videos above belong to.

And with that we close out the month. More in a few weeks.

Schrödinger's Analytics

(Title stolen, by the way, from one of the 3? 4? could it be 5? regular readers of this blog. Hi, Isma!)

Measuring visitors to a website has never been a trivial matter. But, really, we're reaching "how long is the coast of Great Britain" territory (if the phrase doesn't ring a bell, follow the link, do follow it).

At obm we left Google Analytics a few months ago. On the way out we installed Koko Analytics and, somewhat more recently, we added Jetpack; since Jetpack ships with its own analytics bundle, we left it enabled. In the WordPress dashboard, the two visitor charts happen to sit side by side… and every time I open the dashboard it's a circus:

Two bar charts of page views for this blog. On the left, Koko Analytics, which for the last seven days shows 51, 52, 51, 35, 31, 32, and 13. On the right, Jetpack, which for the same days gives 49, 139, 124, 95, 334, 291, and 13.
Screenshot taken in the morning, hence the low figure for the last day

No, there's no way to make sense of it, indeed. In fact, the surprising thing is that, at the moment I took the screenshot, the last day's counters agreed: 13 visitors and 13 page views. A while later they still agreed on page views (17), but one claimed they came from 17 different visitors and the other from 14.
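My guess (and it is only a guess; I haven't read either plugin's code) is that each tool deduplicates "visitors" by a different key, say a cookie versus an IP-plus-user-agent fingerprint. A toy log shows how the same page-view count can yield different visitor counts:

```python
from collections import namedtuple

Hit = namedtuple("Hit", "ip agent cookie")

# hypothetical log: one person using two browsers, plus two cookieless
# visits from behind the same NAT address
hits = [
    Hit("1.2.3.4", "Firefox", "abc"),
    Hit("1.2.3.4", "Chrome",  "abc"),
    Hit("5.6.7.8", "Safari",  None),
    Hit("5.6.7.8", "Safari",  None),
]

pageviews      = len(hits)
by_cookie      = len({h.cookie or (h.ip, h.agent) for h in hits})
by_fingerprint = len({(h.ip, h.agent) for h in hits})

print(pageviews, by_cookie, by_fingerprint)  # 4 page views, but 2 or 3 "visitors"
```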

And of course, you can go look at the hosting provider's statistics, and…

Another traffic chart, showing total requests and unique requests. Total requests start at about 5000 a week ago, drop to about 3500 for the next four days, climb to almost six thousand, and sit very low for the last day. Unique requests stay a bit above a thousand, except on the sixth day, when they jump to 2000, and the seventh, when they are very low.

A parenthesis: doesn't that little "bots" tab catch your eye? It does mine. See above how the hosting counts a bit under twenty-five thousand visitor requests?

The chart shows about 166000 requests in total: 30000 from AI, 12000 from search engines, 0 social, and 123000 of other kinds.

Barely 6 bot requests (almost 7, in fact) for every "human" request. Everything's fine.
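The sarcasm checks out arithmetically, taking the round numbers from the two screenshots:

```python
bot_requests   = 30_000 + 12_000 + 0 + 123_000  # AI + search engines + social + other
human_requests = 25_000                          # "a bit under", per the traffic graph

print(bot_requests / human_requests)  # 6.6 bot requests per human one
```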

Anyway. In any case, this calls for bringing back the xkcd strip about standards (there's always an xkcd strip to bring back):

An xkcd comic strip. The headline is "How standards proliferate" (see: A/C chargers, character encodings, instant messaging, etc.). There are three frames. The first frame is textual: "Situation: there are 14 competing standards." In the second frame, two stick figures talk to each other. The first says: "14?! Ridiculous! We need to develop one universal standard that covers everyone's use cases." The second answers: "Yeah!" The final frame is text. It reads: "Soon: Situation: there are 15 competing standards."

Anyone up for building a new web analytics engine?


PS Yes, this post exists because the view counts these past few days (unbelievable as they are) are much higher than what's been usual around these parts in recent months. I don't know what they measure (I don't know whether they measure anything at all, in fact), but you know the Spanish saying: whether it runs or not, bet on the big horse.

On social networks and under-16s (and the tech bros)

…or César wades into a puddle nobody invited him into.

If anything got more attention on the day Pedro Sánchez announced he was joining the push to ban social networks for under-16s, it was Pavel Durov's response and Sánchez's counter-reply. So, first things first: nothing could be further from my intention than defending Durov (see "Arrest and indictment of Pavel Durov" on Wikipedia, and in particular the Background section, about the events of August 24, to understand that he's not a figure worth defending), or any other tech bro with a social network and a net worth of eight figures and up. (And of meloncete's opinions, better not to speak at all, of course.)

I must also say that I'm not against regulating access to certain services and content by age, and in particular access to social networks (especially the big ones) for people under sixteen. I've read somewhere that maybe what we should ban is access to those networks for white men over a certain age (myself included), and the argument has its charms, but it's still an age restriction.

(I could get picky and dig up cases where that kind of regulation has negative effects, but that's not the point of the exercise. The only thing I'm sure of in all this is that I wouldn't want, for anything in the world, to have anything to do with the decision to regulate something like this. Not at gunpoint.)

There remains, of course, the matter of defining what a social network is. Clearly the measure targets Instagram and TikTok above all. But… if a video game has some mechanism for communication between players (and which one doesn't)… is it a social network? Because it seems to me that (i) in practice they are, and (ii) if Instagram and TikTok are harmful to many of the people who use them, the communication channels of many of those video games are at least as harmful. Is email a social network? What about the SMS messages on the smart-or-not phones of kids from age twelve on?

If this blog has comments (which work the way they work, I know 🙏) and occasionally (very, very rarely, but it has happened) a conversation develops between commenters… Between the first parenthesis and the second, you're right: nothing bad would happen if obm dropped its comments, true. But… are we going to have to impose age limits on the comments of every blog in the world? I mean: if I run a blog (outside WordPress.com and the like), am I going to have to verify the age of anyone who posts a comment? (At obm, I repeat, it's no problem: close the comments and that's that, but…) And the comment sections of online newspapers? (Of those, admittedly, one could argue their disappearance would be a win for society, you're right.)

It must also be said that, without having seen the proposed rule and, above all, how it's meant to be implemented in practice, but having seen some examples of how the age-restriction rules appearing all over the planet (mostly for adult content or, what is basically the same thing, pornography) are actually being rolled out, those implementations give me, to put it mildly, doubts. The Spanish state's capacity for writing laws that are lovely in theory but whose practical application leaves a lot to be desired has, I'd say, been amply demonstrated. And that's with Spain having a mandatory national ID, which, at least in principle, should make things easier.

So, definitions aside, the main doubt (or my main doubt, at least) concerns the mechanisms I'll have to use to prove my age in apps like WhatsApp, Telegram, and Signal, or on social networks like Instagram (yes, dear reader, I'm afraid I must confess I have an Instagram account (and Bluesky, and Mastodon and, if you push me, last.fm, which is also a social network)), and how the privacy of that data will be guaranteed. In Europe this is supposedly going to be done through eIDAS 2… but it isn't implemented yet, and won't be until the end of this year (and, by all the gods, may they not rush the development, because everyone's privacy depends on it)… and I, before trying out a weapon of potential mass privacy destruction, would rather they spent a good few months doing live-fire testing before I'm forced to use it.

Goodbye to Mastodon servers run by individuals… What we can be sure of is that verifying the age of a service's users is going to be yet another service… and it's going to require resources. No doubt there are companies (big consultancies, for instance) rubbing their hands at the prospect of offering us those services… for a modest fee. If we have to pay to protect a threatened group, we pay, of course. But I suspect that those online communities that survive thanks to the goodwill of their operators… are going to see that goodwill stretched even thinner. And those who can afford it will do so by paying companies we don't like one bit.

So should access to social networks not be regulated, then? I repeat: I'm not against regulating access to certain services and content by age and, in particular, access to social networks for people under sixteen. But if we're going to do it, or even if we're just going to applaud the measure, the least we could do is inform ourselves first about how it's going to be done, take stock of the potential side effects of doing it (all regulations have them, as does the absence of regulation), and put all of that into the discussion. If we have to buy, we buy. But knowing the price.

Sandisk Extreme Fit: measure before you buy

(Today's war story is especially inconsequential; you've been warned.)

Curse the first engineer who decided there was no point in letting you swap a laptop's disk. Ever since, sooner or later, we all end up doing juggling acts. Because, however enormous a disk seems to you, in two years it won't be enormous anymore, and in three it will be too small. At Sandisk, who can see it coming, they advertise their Extreme Fit USB drives (here's the page) with this image:

In short, the ideal USB drive to leave plugged into the laptop and be done (for a while) with either worrying about the laptop's disk space or constantly plugging and unplugging a thumb drive. So off I went, for my 512 GB one (in for a penny…). And everything was fine, until I plugged it in…

Cute, eh? Now… see how it leaves a rather tight space for the USB-C port right next to it? Well: to connect the laptop to the hubs we have at the office, whose cables are not exactly the slimmest in the world, I spent a good while until I found one that had lost its housing, because otherwise there was no way it would fit:

Not a millimeter to spare, folks.

There are USB-C cables that cause no trouble at all (the one on my USB splitter, fortunately, to name one). But. Before buying a Sandisk Extreme Fit, if your laptop has its USB-C ports side by side, take a look at everything else you'll want to plug in before you buy.

Byte, January 86

The usual: we continue our project of reading Byte magazine… forty years late, this time with an extra bit at the end. The theme of the month… robotics! A theme we're going to ignore almost entirely, because it does nothing for me. But Byte covers are a classic, so here's this month's:

Cover of the January 1986 issue of Byte. The cover theme is robotics. The illustration is an egg being broken open from inside by a robotic arm, like a hatching chick

Let's begin, then, with the editorial:

A Threat to Future Software

Last October Digital Research Inc. yielded to pressure from Apple and agreed to change its GEM software to decrease its resemblance to Apple Macintosh software. (GEM is an operating environment for several MS-DOS- and PC-DOS-based computers that allows a user to interact with a computer via windows and icons rather than the usual text-only commands.) Let's ignore, for the moment, the uncertain worth of a "visual copyright" (the legal term for Apple's copyrighting of the overall "look" of Macintosh software). Let's also ignore the ethics of Apple's actions. The point to focus on, instead, is that Apple's actions are to no one's benefit: Both the microcomputer industry and Apple itself will suffer from their effects.

Apple's actions will slow the growth of the microcomputer industry, which will hurt Apple by shrinking the potential microcomputer audience. Already, several small companies are worried that some project they're working on (and, often, they with it) will be cut down because it is "too Mac-like." In addition, the success of Apple's tactics may encourage other companies to try similar actions, thus increasing the paralysis and anxiety in the industry.

These actions will stifle the incremental evolution that is at the root of any significant growth in our industry. By "incremental evolution" I mean the process of gradual improvement of a product type that eventually leads to a more robust, useful product. For example, Ashton-Tate's Framework did not spring full-blown from the heads of the programming team at Forefront. It had its roots in Dan Bricklin's and Bob Frankston's VisiCalc spreadsheet, Sorcim's Supercalc (which added functions and sold to a market not supported by VisiCalc), Mitch Kapor's VisiPlot (which gave the distinctive highlighted menu bar now used in so many programs), the software integration of Lotus 1-2-3, and the icons, windows, and pulldown menus of— well, you get the point. If companies are afraid to go to market with what they think are incremental— but distinct— improvements on a basic design, we will become a stagnant industry bounded by the usual and comfortable.

According to Irving Rappaport, Apple's associate general counsel, Apple's intent is to prevent other companies from creating products that are easy to use because of their similarity to the Macintosh. "If people look at it and say, 'Gee, that's like the Mac— I can operate that,' when that's the result you get, it's over the line" of infringement of Apple's copyrights. The effect of this intent is to fragment the industry in the face of what was becoming a de facto standard for human-computer interaction. This lack of standardization will cause many people to stay uninterested in computers because they will have to relearn basic skills with each brand of computer they encounter. (Imagine how many people would drive cars if car manufacturers used different controls for every function in the car.)

Apple might argue that, by claiming a larger slice of a smaller pie, it will still come out ahead. We believe that it will be hurt directly by its actions and will end up with a smaller piece of a pie that is itself smaller. Apple will, in effect, build a wall around its ghetto of Macintosh products, thus limiting its own growth and encouraging people to "live" elsewhere.

Texas Instruments' TI-99/4A provides a good example. TI announced that it intended to directly profit from all software written for its machine by forcing third-party software developers to publish their products through TI. When a brave few brought out 99/4 cartridges on their own, TI added a proprietary chip to their cartridges that the computer required before it would run the enclosed software. Needless to say, the few developers working on 99/4 software wisely turned to support other computers.

The same may happen to Apple. IBM already sells over half the business computers bought today, and IBM PC-compatibles account for a fairly large slice of what's left. If Apple has been slowing the erosion of its market share to IBM with the Macintosh line (and I think it has), its current moves will alienate software and hardware developers, who will begin to lavish their creativity upon the more congenial IBM PC-compatible marketplace. And where innovation goes, the market will follow.

Consider: IBM made its software and hardware architectures open. It allowed the development of innumerable hardware clones, many far more similar to IBM products than GEM is to the Macintosh desktop; consequently, the IBM PC-compatible market far outdistanced its combined competitors in less than two years. On the other hand, Apple is actively discouraging not only copying but also borrowing from its software design. It claims the sole right to benefit from a set of ideas that Apple itself has borrowed and improved on (the most direct borrowing was from work done at Xerox PARC). Given these two opposing directions, what do you think will happen?

A Call to Action

We at BYTE call on Apple to recognize the long-term implications of its actions and limit itself to prosecuting cases where the alleged theft is not of "looks" but of actual program code. Barring that, we call on Apple to license its allegedly copyrightable interface to markets that do not directly compete with its current or planned product line— if the licensing fees are reasonable, everyone will profit.

If neither of these things happen, we call on the judicial system to hand down rulings that reflect a strict interpretation of the visual copyright laws— that is, that a product is at fault only if it shows no distinguishing characteristics in appearance or operation from the alleged original; this would protect products that show incremental evolution. We also call on the industry to do two things. The first is to stand up to Apple and see the case decided on its legal merits. The second is to develop an alternative graphic interface and allow its wide adoption throughout the non-Apple computer community; in this way, the rest of us can get on with the business of making computers— in general— good enough that everyone will want to use them.

[Editor's note: Apple maintains that the agreement covers "only three specific products," but one of them is GEM Desktop, which defines the overall GEM environment. Also, according to Kathleen Dixon of Apple, the agreement includes any custom work DRI has done, including the modified GEM software that Atari uses in its 520ST computer] ■ —Gregg Williams, Senior Technical Editor

Did you think Apple only ever complained about Microsoft copying it? (To be fair: over the years Microsoft has copied things from Apple… and there are even cases where Apple copied from Microsoft. And where it says Microsoft, you can read Google/Android.) Well, before complaining about Microsoft and Windows, they complained about GEM, Digital Research's graphical layer for PC/MS-DOS systems (and not only those: we come back to this below). While I respect Apple's intellectual property (more than Byte's editor does, judging by the read), I agree with him that with these things, then and now, the consumer comes out losing, and by quite a lot.

On to "Microbytes", the short-news section. This time, on the one hand, something we'd already seen here evolves… flat LCD screens get color:

Epson, Toshiba Announce Color LCDs

Toshiba has developed an active-matrix, eight-color, 640- by 480-pixel, 10-inch-diagonal liquid-crystal display (LCD) that nearly matches the brightness of a standard color TV. No pricing or availability information was given.

Epson announced a backlit high-contrast, 5.13-inch-diagonal color LCD with a resolution of 480 by 440 pixels (one-third of which are red, green, or blue). Epson says the display's contrast ratio is more than 10 times that of a standard reflective LCD and has a viewing angle greater than 60 degrees. Epson also unveiled a high-contrast, 9-inch-diagonal monochrome LCD with a resolution of 640 by 400 pixels. Samples of both displays will be available during the first half of 1986; prices should be approximately twice as much as standard reflective LCDs.

Epson also announced two 10-inch-diagonal monochrome displays using ferroelectric smectic-C crystals. The 640- by 400-pixel and 640- by 200-pixel displays are said to have high contrast ratios, low power consumption, and moderate cost; samples may be available late this year.

And on the other (literally: you have to turn the page to get there), we chip away a bit at the myth that Kodak died because it didn't innovate in digital photography:

Kodak Proposes Tiny Magnetic Disk for Photographs

Eastman Kodak, Rochester, NY, has lined up more than 30 companies— including Sony, Hitachi, and Fuji— to support its 47-mm (1.85-inch) floppy disk for storage of electronic still images. The 800K-byte disk can store up to 50 images of 240-line NTSC video. Eventually, the disk is intended for use in cameras; for now, Kodak is working on a 35-mm film-to-disk transfer station for use in developing labs and a still-video player/recorder for the disks.

…the fact is, few companies researched and invested in digital photography like Kodak, which amassed an immense portfolio of patents on the subject. What killed Kodak (quite a few years after 1986) was, above all, the fear of cannibalizing its "chemical" market.

Now, on to the advertising:

Ad for the Hayes Smartmodem 2400 modem

Yes, friends, 1986 is the year of flying at 2400 baud, not the "old" 1200. Almost two and a half kilobits, indeed. Remember the torture of having "only" 4G coverage and downloading things at a few megabits? (But don't get too excited: not every phone line of the era could sustain such an outrageous speed.)
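To put 2400 baud in context, a back-of-the-envelope calculation (assuming the usual 10 bits on the wire per byte, with start and stop bits, and no compression):

```python
def transfer_minutes(num_bytes, baud):
    # ~10 bits per byte on a serial line: 8 data bits + start + stop
    return num_bytes * 10 / baud / 60

floppy = 360 * 1024  # a 360K 5.25" floppy, in bytes

print(round(transfer_minutes(floppy, 1200)))  # ~51 minutes at 1200 baud
print(round(transfer_minutes(floppy, 2400)))  # ~26 minutes at 2400
```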

And we keep browsing ads, with a historic moment: the first ad we've seen for Windows!

Large copy: Introducing Power Windows. We see a computer screen with perhaps 8 colors and four windows, which don't overlap but sit tiled side by side. We also see a mouse and a 5¼″ floppy disk labeled Microsoft Windows.

I won't paste the whole advertorial (it ran 8 pages in total; Microsoft clearly had money to spend already), but I will leave you this marvel of graphics:

Double-page spread with a large screenshot showing up to five windows on screen, again without overlapping. We see the clock application, a window with a primitive file explorer, a "filing assistant", and a bar chart in strict black and white.

Recognize the Windows you've known all your life? Me neither.

We were talking about GEM earlier… and we pick it up again here, because this issue reviewed the Atari ST, the third machine with a Motorola 68000 processor, after the Macintosh and the Amiga (always remember: Amiga better than ST better than Mac). And the ST's operating system was, indeed, Digital Research's GEM (well, as on PCs, GEM was the graphical layer on top of TOS, the real operating system).

The Atari 520ST

The 68000 unbounded

Editor's note: The following is a BYTE product description. It is not a review— for several reasons. Some of the equipment we received, such as the hard-disk drive, was prototype hardware, and at the time of this writing, software is scarce. Atari has not yet completed its BASIC interpreter, and the operating system, TOS, remains unfinished. Nonetheless, we are as intensely interested as our readership in new technology, and we feel we have learned enough to share some of the results of our investigations. We began our work on this description as soon as we were able to get a system from Atari. A full review will follow in a subsequent issue.

For many years the public has equated the Atari name with arcade games and joysticks. In truth, the Atari 400/800/XL computer line is technically at least comparable to, if not better than, other 8-bit machines, so it should not be a surprise that the company's latest venture, the 520ST (see photo 1), is a competitive 68000 system. Indeed, we are most impressed with the clarity of the graphics, with the speed of the disk I/O (input/output), and with the 520ST's value.

The system is not without its problems. The desktop is less effective than the Macintosh's, the keyboard has an awkward feel, and the current operating system makes it impossible to switch between high-resolution monochrome and low- or medium-resolution color without installing the other monitor and rebooting. Nonetheless, we are left with a very favorable impression; several software-development languages are already available, including FORTH, Modula-2, and C. With them, you can tap the power of the 68000 at a most reasonable price.

System Description

The Atari 520ST is a keyboard computer. Like the Commodore 64 and the Atari 400/800, the 520ST keyboard unit contains the microprocessor, the memory, the video and sound circuitry, and so on. The power supply, disk drives, and monitor are external devices. The 520ST has a variety of ports, but there are no internal expansion slots.

The In Brief box on page 90 summarizes the features of the Atari 520ST. For $799, you get the CPU, a 12-inch-diagonal monochrome monitor, and one external single-sided double-density floppy-disk drive. For $999, you get the same system with a 12-inch RGB analog monitor in place of the monochrome monitor (see photo 1). Both systems provide 512K bytes of RAM (random-access read/write memory), a Motorola 68000 microprocessor, MIDI ports with a transfer rate of 31,250 bps (bits per second), a DMA (direct memory access) port with a transfer rate of 10 megabits per second for a hard disk or CD-ROM (compact-disk read-only memory), and much, much more. To be sure, owners will make some sacrifices. The unit does not have an RF (radio frequency) modulator for television output, every peripheral has a separate power supply (wire haters beware), and the operating system currently rests in RAM, stealing over 200K bytes from your workspace. We have summarized other problems below, but almost all are insignificant when you consider what you do get for the money. And rest assured, the system works. Our first system, like most of the first production units, had to have several chips reseated. It now functions properly, and we have not heard of any similar quality-control problems on the latest 520STs.

The Hardware Design

The heart of the 520ST is the MC68000, with its 16-bit data bus and 24-bit address bus, running at 8 MHz (see figure 1). The rest of the system was designed to stay out of the 68000's way. (See the 520ST motherboard in photo 2.)

The Atari design team began work on the 520ST in May 1984. From the start, they had several specific goals in mind. The first was to choose a fast microprocessor and do everything to let it run effectively at full speed. To the Atari team, that meant maximizing bus bandwidth and relegating as...

And… shall we compare GEM with Windows, as presented by Microsoft itself in its ad campaign?

Two photos of screens running GEM, at medium resolution in one and at high resolution in the other. The presentation is far more sophisticated than the Windows we saw earlier, with overlapping windows and the system's menus.

(That said, we will admit that the resemblance to the Macintosh operating system is more than remarkable. It is undeniable.)

We continue with our "we couldn't print this in a magazine today without getting stoned" section, this time with a BASIC program for drawing 3-D surfaces:

EASY 3-D GRAPHICS

BY Henning Mittelbach

A BASIC program for plotting 3-D surfaces

AFTER READING "Budget 3-D Graphics" by Tom Clune (March 1985 BYTE, page 240), I decided to develop a low-cost program for three-dimensional graphics on small computers. 

The program is based upon the formulas for an axonometric projection
in relation to the origin, as shown:

XB = X*COS(PHI) - Y*COS(PSI) 
YB = X*SIN(PHI) + Y*SIN(PSI) + Z

Depending on the graphic window of the computer used, you may change these formulas to

XB = XO + X*COS(PHI) - Y*COS(PSI)
YB = YO - X*SIN(PHI) - Y*SIN(PSI) - Z

where XO and YO will represent the origin of the axes, as shown in figure 1. (I developed the program on an Apple II, with XO = 110 and YO = 180.) Also in figure 1, (XB,YB) is the point to be plotted, and PHI and PSI are the angles referring to the horizon. The function Z = F(X,Y), in line 200 of the program, needs a scaling factor F (line 210) that the user has to introduce in the program.
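For modern readers, the whole projection fits in a few lines. Here is a quick Python sketch (my own, not the article's BASIC listing), using the article's Apple II origin of XO = 110, YO = 180; the angles are arbitrarily chosen for illustration:

```python
import math

# Screen origin used by the article's Apple II version
XO, YO = 110, 180
# Angles of the X and Y axes relative to the horizon (assumed values)
PHI = math.radians(30)
PSI = math.radians(30)

def project(x, y, z):
    """Axonometric projection of (x, y, z) onto screen coordinates.

    The screen Y axis grows downward, hence the sign flips on YB.
    """
    xb = XO + x * math.cos(PHI) - y * math.cos(PSI)
    yb = YO - x * math.sin(PHI) - y * math.sin(PSI) - z
    return xb, yb

# The origin of the axes projects onto (XO, YO) itself,
# and raising Z moves the point up the screen (smaller YB):
print(project(0, 0, 0))   # → (110.0, 180.0)
print(project(0, 0, 10))  # → (110.0, 170.0)
```

Feeding the projected points of a grid of (X, Y, F(X,Y)) samples to any plotting routine reproduces the article's wireframe surfaces.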

The Program

The program starts at lines 100 to 180 where you set the parameters XO, YO, ...

Note that the program was fairly sophisticated and even performed hidden-surface removal:

Plots of the functions sine times cosine, the exponential of the sine of x times y, and x times y.

(If that weren't enough for you, you can jump to page 397 to see how to implement Euclid's algorithm for computing the greatest common divisor.)
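The Euclidean algorithm in question reduces to a two-line loop in any modern language; a hypothetical Python version, not the magazine's listing:

```python
def gcd(a, b):
    """Greatest common divisor via Euclid's algorithm:
    repeatedly replace (a, b) with (b, a mod b) until b reaches zero."""
    while b:
        a, b = b, a % b
    return abs(a)

print(gcd(1986, 86))  # → 2
```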

I said I would skip the robotics coverage, but I will keep one of the section's articles:

MACHINE VISION

by Phil Dunbar

An examination of what's new in vision hardware

THE POTENTIAL APPLICATIONS of machine vision are many and obvious. Everything from quality assurance to robotic navigation could benefit from the availability of reliable vision systems for computers. Perhaps less obvious, though, is the variety of problems that hamper development of the technology. These problems appear on all levels of machine vision—hardware, low-level analysis, and high-level AI (artificial intelligence) manipulation of low-level data. This article will discuss problems that plague the development of vision-system hardware and indicate some of the technology that has emerged to address these problems.

You might think that the most difficult hardware problem in vision systems is digitizing the high-frequency analog stream of camera data. In fact, that is not so. Currently, machine vision algorithms use gray-scale (i.e., monochrome intensity) video information almost exclusively. Such information can be adequately extracted from an analog signal by a 6-bit or 8-bit A/D (analog to digital) converter. Real-time conversion requires approximately a 10-MHz conversion rate to digitize a 512- by 512-pixel image.

These rates can be achieved with flash converters, pioneered by the TRW company when it introduced the TDC 1007 in 1977. Flash converters employ 2^N − 1 comparators to perform N-bit conversions. That is, an 8-bit flash converter requires 255 comparators to operate. Since all possible digitized values can be compared to the signal at once, the throughput is much greater than with successive-approximation methods. Of course, the complexity of the converter rises exponentially with linear increases in resolution. Notable among the commercially available flash converters is TRW's 8-bit monolithic flash converter (TDC 1048), which can operate at the speeds necessary for real-time machine vision applications and costs about $140 per unit. The real problems with vision hardware revolve around the cameras. The problems fall into two basic categories: video signal standards and limitations of particular camera hardware technologies.
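The article's numbers are easy to check: an N-bit flash converter needs 2^N − 1 comparators (one per decision threshold), and a 512 by 512 image at the standard 30 frames per second already implies a pixel rate close to the quoted 10 MHz. A small back-of-the-envelope sketch (mine, not the article's):

```python
def flash_comparators(bits):
    """An N-bit flash ADC needs one comparator per threshold: 2**N - 1."""
    return 2 ** bits - 1

print(flash_comparators(8))  # → 255

# Pixel rate for a 512x512 image at 30 frames/s (ignoring RS-170
# blanking intervals), which lands near the ~10 MHz the article cites:
pixels_per_second = 512 * 512 * 30
print(pixels_per_second)  # → 7864320
```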

Television Standards

Much of robotics suffers from a lack of standards. Machine vision, on the other hand, suffers from the existence of video signal standards that are not appropriate for our needs. Those standards were created by and for the television industry. Since the entertainment industry is still a far more lucrative market for camera manufacturers than machine vision, few image sensors and cameras deviate from television standards.

The monochrome video signal standard used in the United States, Japan, and most of the Western Hemisphere is RS-170, a subset of the NTSC (National Television Systems Committee) standard. Europe uses the international CCIR (Consultative Committee, International Radio) standard, which is similar to, but not compatible with, RS-170. Since both standards present essentially the same problems to machine vision applications, I will limit my remarks to the RS-170 standard.

The RS-170 standard defines the composite video and synchronizing signal that your television uses (see figure 1). The image is transmitted one line at a time from top to bottom of...

And after vision came a piece devoted to tactile sensors, another on autonomous navigation, and one on AI in computer vision. Once again, you can't tell whether it's '86 or '26 (and one doesn't feel up to explaining to the authors that the whole thing still had a few decades left to go).

And we take one last look at the ads, because I don't believe we had yet covered IBM's wonderful "Charlot" (Little Tramp) campaign here:

Two-page ad. On the left we read that the PC has taken performance to new heights. On the right we see the Little Tramp sitting atop a mile-high pile of documents of every kind, working on an IBM PC.

It wasn't a one-off ad, I assure you: the campaign ran for years, always visually wonderful. Here's a compilation of its TV commercials.

And we'll close with another historic moment:

The Acorn RISC Machine

A commercial RISC processor
by Dick Pountain

Acorn Computers Ltd. is one of the U.K.'s most successful computer companies, but like many others, it had its share of financial problems during the depressed year of 1985. Set up in 1979 by two Sinclair alumni, Chris Curry and Hermann Hauser, the Cambridge-based firm (4a Market Hill, Cambridge CB2 3NJ, England) started out manufacturing a set of modular single-board controllers based on the MOS Technology 6502 processor. These small boards stacked together to make up complete industrial-control systems. The following year the Acorn people launched the Atom personal computer, a packaged but expandable machine that arose out of their experience with 6502 systems. For a while, at around £200, the Atom was the cheapest hobby computer available here, and it attracted a strong following, particularly among those who are as handy with the soldering iron as with the assembler. Hopped-up Atoms can still be found to this day.

Acorn's next product, initially called the Proton, was designed to meet a very advanced—for the time—specification published by the BBC (British Broadcasting Corporation), which was requesting bids to supply a personal computer around which an educational television series would be produced. Acorn won the contract, after a strong and often acrimonious contest in which Sinclair Research, whose 48K-byte color Spectrum was already on the market, lost out.

After a frustratingly long delay due to quality-control problems with the ULAs (uncommitted logic arrays), the BBC computer was launched and proceeded to corner the market in schools and universities. Acorn became a very wealthy company, with a turnover reputed to be £100,000,000 per annum at its high point.

The BBC Micro (alias the Beeb) is still quite a deluxe machine, with better high-resolution color graphics than any of its competitors, and quite a bit faster, thanks to its 2-megahertz 6502. Another plus is the provision of a 10-MHz bus, called the Tube, to which second processors can be attached. Acorn charges a lot of money for this sophistication, though, and the Beeb has kept its £400 price long after competitors have slashed theirs to below the £200 mark.

Acorn had from the start paid more attention to software than most manufacturers, recruiting the brightest Cambridge University computer science graduates for its software division. As a result, the Beeb acquired a range of languages unrivaled by any machine but the Apple II, including an advanced structured BASIC, LISP, Logo, FORTH, Pascal, BCPL (Basic Combined Programming Language), and more. But despite all these positive points, the Beeb has a major drawback, a shortage of memory. The ambitious specification, combined with the limited addressing capabilities of the 6502, left it with a maximum of 32K bytes of workspace (only this year upgraded to 64K bytes), and in the higher-resolution graphics modes this can be reduced to a mere 8K bytes. That doesn't get you very far in LISP or Logo.

So at the height of its prosperity Acorn set a team to design, in secret, its own processor to replace the 6502. This may seem like an ambitious, even rash, undertaking, but the people on the Acorn team were so wedded to the simplicity and speed of the 6502 architecture that they found it hard to countenance any of the commercially available 16-bit replacements. The BBC operating system is heavily interrupt-driven, and the sluggish interrupt latency of 16-bit chips, such as the Intel 8086 and Motorola 68000, would have meant introducing DMA (direct memory access) hardware and all sorts of other undesirable complications. Acorn did, in fact, adopt the National Semiconductor 32016 as a second processor for the Beeb, but only after first offering a 3-MHz 6502. And so they conceived the idea for the...

Acorn RISC Machine… A, R, M. The architecture of the chip in your phone. Or in your Mac, if you have one. And there you are, watching its birth live. No small thing.

And that's it for this month's Byte. If you want to do your homework for next month, as always, here are the Byte magazine archives at archive.org.


And that would have been all… but the other day I learned of the death of Stewart Cheifet (even the New York Times ran an obituary). Who is Stewart Cheifet, you ask? Don't tell me you've never watched his Computer Chronicles. If Byte is, at least for me, one of the essential print resources for revisiting the history of computing, Computer Chronicles is the same thing, but on video. The archives of the show from PBS, American public television (sadly now in mortal danger, thanks to the Trump administration and its allergy to quality information), are an essential document if you're interested in the 1983–2000 period. And as a tribute, and since these Byte posts <irony>aren't long enough already</irony>, I thought that rounding them out by watching the corresponding episodes would be, at the very least, a curious exercise1. So here are the January '86 episodes…

On January 7 the show kicked off with… artificial intelligence!

(Didn't you love the Byte sponsorship spot? 😅)

We can't fail to mention Cheifet's co-host on the show: none other than the ill-fated Gary Kildall, creator of CP/M… and of GEM. There are multiple universes parallel to ours in which we love and hate Kildall, CP/M, and GEM, don't remember who Bill Gates was, and know nothing of an operating system called Windows.

The Jerrold Kaplan who appears in the first interview, by the way, was working at the time with Mitch Kapor; in 1987 he founded Go, devoted to what would later be called PDAs, and later still he founded the first auction website (five months before eBay). Not bad. We can also highlight the presence of the philosopher Hubert Dreyfus, strongly doubting the expertise of the era's expert systems :-).

Wonderful, too, that the experts suggested 1986 might be the year of speech recognition 😅.

Then, on the 14th, another topic nobody ever talks about nowadays: computer security.

…although at the time this meant using computers to fight crime, wrestling with fingerprint catalogs or using geographic information systems, for example, but also digitizing processes just like any other organization would.

I would recommend, though, jumping to minute 27:30 of the video, where Cheifet discusses the graphics of the film Young Sherlock Holmes… created by a "new graphics computer, built by Industrial Light & Magic, a division of Lucasfilm. The computer is called… 'Pixar'."

And I'll stop there because, according to this episode list on Wikipedia, the next one wouldn't air until February.

That's it; more next month (we'll decide then whether it's Byte alone or with Computer Chronicles added).

  1. A curious exercise that, inevitably, didn't occur only to me: I see that someone has set up computerchronicles.blog and has already revisited no fewer than the first 133 episodes. ↩︎