The Quantum Spy

Title:                      The Quantum Spy

Author:                   David Ignatius

Ignatius, David (2018). The Quantum Spy: a thriller. New York: W. W. Norton & Company

LCCN:    2017015373

PS3559.G54 Q36 2018

Date Posted:      December 1, 2017

Reviewed by Marisha Pessl[1]

A similar widespread villainy lies at the heart of David Ignatius’s The Quantum Spy, a somber espionage procedural about the race to build the world’s first quantum computer—a theoretical frontier at the intersection of computer science and quantum physics. Ignatius is a Washington Post columnist who has long covered the C.I.A., and he happily takes us for a jaunt through a world of anonymous hotel rooms and conference tables across Beijing and Vancouver and Dubai, where decisions to take someone off “the shelf” (i.e., bring him or her back into action) are blankly relayed and executed. American start-ups on the brink of game-changing innovation are visited by a C.I.A. officer, a “lean, putty-faced man with a bad haircut” who quietly demands that the United States government be their only client. Operatives aspire to the “highest art” of their profession: to “appear ordinary.”

Here, the ostensible enemy is a mole inside the C.I.A. known as RUKOU, or the DOORWAY, whom the C.I.A. must ferret out and eliminate, all the while keeping the Chinese away from their technological breakthroughs—a Sisyphean exercise if ever there was one.

The mood is mournful and restrained. The C.I.A.’s vibe feels like a highway motel with thin walls, a smell of chlorine, a vending machine where your Twix gets stuck on the glass. The most delightful aspect of the book is the characterization of the Chinese—their expletive-ridden insults, downbeat perspective (“Bad luck is always hiding inside the doorway, down the next hutong”), and quirks. Chinese agents carry a mijian with them at all times, “a small, leatherbound diary” in which they write things “that were never, ever to be shared.” In one fascinating scene set in Mexico, a Chinese agent with a Spanish accent unnerves the Chinese-American hero, Harris Chang, by unveiling Chang’s own secret political Chinese ancestry to him. It proves to be a surprisingly powerful interrogation technique: “He was uncomfortable. It was as if someone else had taken possession of his life story.”

It comes to light that the mole is motivated by a desire to build “one world”—a single borderless country that brings to mind Facebook’s hope to “bring the world closer together.” But infinitely more devastating than any double agent is the operating hollowness at the heart of the C.I.A. When superiors question Chang’s loyalty, he submits to three polygraphs; however, no lie detector can resolve the problem. Neither innocent nor guilty, he is afflicted by a lack of resolve: “He occupied a space where things are ambiguous, where people are simultaneously friend and foe, loyal and disloyal, impossible to define until the moment when events intervene and force each particle, each heart, to one side or the other.” The agent is a spinning electron in the atom, eluding capture by a Heisenberg uncertainty principle. There is the probability of an exact location, which holds true only during the nanosecond of perception. Then he is at large again, careening around a moral fog.

[1] Marisha Pessl, “Our Villains, Ourselves: A Thriller Roundup,” The New York Times Book Review. Marisha Pessl is the author of the novels Night Film and Special Topics in Calamity Physics. Her next book, Neverworld Wake, will be published in 2018. A version of this article appears in print on October 29, 2017, on Page BR16 of the Sunday Book Review with the headline: Thrillers.

The Universal Computer

Title:                      The Universal Computer

Author:                   Martin Davis

Davis, M. (2000). The Universal Computer: The Road from Leibniz to Turing, New York: Norton.

LOC:       00040200

QA76.17 .D38 2000

Date Posted:      May 2, 2013

Reviewed by Brian E. Blank[1].

If you teach a course on number theory nowadays, chances are it will generate more interest among computer science majors than among mathematics majors. Many will care little about integers that can be expressed as the sum of two squares. They will prefer instead to learn how Alice can send a message to Bob without fear of eavesdropper Eve deciphering it. No doubt they would be surprised to see the theory of numbers described as a “purely theoretical science without practical applications” or, even more bluntly, as “useless”. Yet, those are exactly the assessments of number theory that were given by Uspensky and Heaslet in 1939 and by Hardy in 1940. It is with a sense of irony that we read these pronouncements now, knowing that the seeds of their contradiction had already been sown. Work that would lead to the modern digital computer was already under way.
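The Alice-and-Bob exchange those students prefer rests on exactly the number theory Hardy called useless. A minimal sketch of textbook RSA makes the point; the primes and message below are toy illustrative values (real RSA uses primes of hundreds of digits, plus padding), not anything from the book under review:

```python
# Toy RSA: Alice publishes (n, e); Bob encrypts with them; only
# Alice's secret exponent d can decrypt. Eve would have to factor n.
p, q = 61, 53                 # Alice's secret primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient (3120)
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: e*d ≡ 1 (mod phi)

message = 65                  # any number < n
cipher = pow(message, e, n)   # Bob computes c = m^e mod n
plain = pow(cipher, d, n)     # Alice recovers m = c^d mod n
assert plain == message
```

The security rests on a number-theoretic asymmetry: computing `pow(m, e, n)` is fast, but recovering d from (n, e) requires factoring n, which no known classical algorithm does efficiently at cryptographic sizes.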

The great theoretical advance that led to the modern computer may be traced to 1936 when Alan Turing formulated a highly original concept that would eventually be called the Turing machine. At the time, projects to build simpler computing devices were just about to begin. Between 1936 and 1939 the German engineer Konrad Zuse designed and constructed two experimental electro-mechanical digital computers, the Z1 and Z2. In 1937 Howard Aiken submitted to IBM a formal proposal titled Proposed Automatic Calculating Machine. The product of Aiken’s initiative, the Harvard Mark I (also known as the IBM Automatic Sequence Controlled Calculator) was placed in service in the spring of 1944. It is considered the first electro-mechanical number-crunching computer. Mechanical it certainly was. The 750,000 moving parts of Aiken’s machine are said to have produced a roar like that of a textile mill. Less than two years later, in February 1946, a computer known as the ENIAC was fully operational. This 30-ton behemoth, conceived and constructed by John Presper Eckert and John William Mauchly, is considered to be the first electronic computer. Electronic it certainly was. When the ENIAC went online, its 17,468 vacuum tubes are said to have dimmed lights throughout Philadelphia.

The Mark I and the ENIAC were both funded by the military for the purpose of doing numerical calculations vital to the war effort. With the conclusion of the war, seminumerical commercial applications such as accounting, scheduling, record-keeping, and billing were developed. As the computer rapidly evolved from its eponymous function, the list of tasks assigned to it swelled. Even tasks that do not involve a single computation have been taken over by the computer. Nowadays a book review, for example, is likely to be solicited by computer communication, composed, researched, spell-checked, and typeset on a computer, submitted by computer, and posted for access by a worldwide network of computers. Even the “printer’s proofs” might arrive in the form of a computer file.

The conversion of number theory from a “useless” pursuit to an applied science has been due in large part to an especially ironic consequence of the computer’s evolution: in order that we may securely rely on the computer for such noncomputational tasks as commerce, communication, and archiving, we must first enlist the theory of numbers to foil the computational power of the computer to decrypt. Like the sea change in number theory that it occasioned, the metamorphosis of the computer from number cruncher to all-purpose logic machine has been a profound transformation that is now taken for granted but was not originally transparent. Aiken, for example, did not recognize the transition in progress. “If it should turn out,” he wrote in 1956, “that the basic logics of a machine designed for the numerical solution of differential equations coincide with the logics of a machine intended to make bills for a department store, I would regard this as the most amazing coincidence I have ever encountered.”

Although the electronic digital computer is barely more than half a century old, its history has attracted a devoted following. For more than twenty years a scholarly journal, the Annals of the History of Computing, has chronicled the development of computing in minute detail. A steady stream of books—some erudite, some popular—has allowed engineers, historians, and journalists to delve into nearly every facet of the computer revolution. Martin Davis’s new book, The Universal Computer: The Road from Leibniz to Turing, is not like any of them.

Davis’s perspective is unique: he is concerned with the development of the computer as an engine of logic rather than as an instrument of calculation. As he explains in his introduction, “A computing machine is really a logic machine. Its circuits embody the distilled insights of a remarkable collection of logicians, developed over centuries. Nowadays, as computer technology advances with such breathtaking rapidity, as we admire the truly remarkable accomplishments of the engineers, it is all too easy to overlook the logicians whose ideas made it all possible. This book tells their story.”

One cannot imagine an author more qualified than Martin Davis for such an endeavor. Many Notices readers will be familiar with Davis from his contributions, both in research and exposition, to Hilbert’s tenth problem. Others will know him from his excellent textbooks, which have become standard references of theoretical computer science. Those who keep track of awards will recognize him as the recipient of the Chauvenet, the Lester R. Ford, and the Leroy P. Steele Prizes. In addition to his credentials as distinguished logician and honored expositor, Davis is also a pioneer programmer who wrote code for both the Institute for Advanced Study computer, a historic machine that has been in the collection of the Smithsonian Institution since 1960, and for one of its clones, an Army Ordnance “johnniac-class” computer known as the ORDVAC. His engaging autobiographical sketch offers a rare glimpse of the programmer’s craft as it existed in 1951, when the state of the art amounted to five kilobytes of random access memory tenuously implemented as static charge on the surfaces of cathode-ray tubes.

In The Universal Computer Davis begins his tale with Leibniz, whose proposal for an algebra of logic is the point of departure on the road to the universal Turing machine. It is indicative of the enthusiasm with which Davis infuses his writing that where others see “fragmentary anticipations of modern logic”, Davis perceives “a vision of amazing scope and grandeur.” As Davis tells the story, Leibniz “dreamt of an encyclopedic compilation, of a universal artificial mathematical language in which each facet of knowledge could be expressed, of calculational rules which would reveal all the logical interrelationships among these propositions. Finally, he dreamed of machines capable of carrying out calculations, freeing the mind for creative thought.” The chapter is called “Leibniz’s Dream”, and that dream is a sort of North Star toward which the axis of each subsequent chapter points.

Following the style of “Leibniz’s Dream”, Davis devotes each of the next six chapters to the life and contributions of a leading logician: the list comprises Boole, Frege, Cantor, Hilbert, Gödel, and Turing. In making these choices, Davis has taken great care not to stray from the road to Turing. Logicians such as Brouwer and Russell are discussed in a fitting amount of detail, but De Morgan, Peano, and Skolem are mentioned only in passing, while Peirce, Schröder, Löwenheim, and Zermelo are not mentioned at all. So coherent is the narrative, however, that one has the illusion that one is reading the entire history of mathematical logic without any discontinuity in its evolution.

Through the first seven chapters the principal logical concepts of each protagonist are presented at a level that is appropriate for a general audience. It was a shrewd idea to embed these discussions inside capsule biographies of the logicians. This stratagem serves both to lighten the load of the reader who has no prior training in mathematical logic and to maintain the interest of the more experienced reader who is already familiar with the logical theories. It is true that standard biographies exist, and, with few exceptions, Davis does not go beyond them. Nevertheless, most readers will welcome his lively, informal synopses, replete as they are with amusing anecdotage. Perhaps the best of these involves Davis himself. Driving in Princeton with his wife, Virginia, he happened to pass the town’s most famous denizen, dressed like a tramp, walking with Gödel, nattily attired in suit and tie, briefcase in hand. “Einstein and his lawyer,” quipped Virginia. Naturally Gödel and Turing provide ample grist for the raconteur’s mill, but the fact is, every one of the featured logicians, the dusty Victorian pedant George Boole included, makes for a fascinating character study.

By the end of the seventh chapter, Davis’s readers will have learned about Boole’s algebra of logic, Frege’s Begriffsschrift, the Continuum Hypothesis, Gödel’s theorem on undecidable propositions, Hilbert’s Entscheidungsproblem, and Turing machines. At this point the timeline of the narrative has reached the end of World War II: all the developments in logic that are needed for the universal computer are in place, and their physical realizations are literally on the drawing board. In keeping with the chronology, Davis interrupts Turing’s biography to direct his attention to the engineers who would take the next steps toward the fulfillment of Leibniz’s dream. He begins his eighth chapter, “Making the First Universal Computers”, with thumbnail summaries of the contributions of the hardware pioneers Aiken, Atanasoff, Eckert, and Mauchly. It may be argued that these sketches are too brief, but in fact these hardware implementations fall outside the scope of Davis’s book. That said, I do find it surprising that Davis accords only one paragraph to Claude Shannon, whose 1938 master’s thesis in electrical engineering showed how to apply Boole’s algebra of logic to electronic switching circuits. The complete omission of Konrad Zuse is even more puzzling. In any event, the early history of computing is well provided for.

The historian Tom Settle has used the death of Galileo to illustrate how elusive historical truth can be. Despite an authentic death certificate that cites the evening of January 8, 1642, calendrical variation renders uncertain which one of four days is actually being specified. It is tempting to believe that more recent events must prove less troublesome. Indeed, the authors of a new book about computer scientists assert that “in most sciences the seminal thinkers lived in the remote past. To uncover what they did…we must scavenge in the historical record, picking among scraps of information, trying to separate facts from mythology. Computer science is different.” Regrettably, this plausible claim is not true. Above all, priority for one of the indispensable principles of modern computing, the stored program concept, has proved to be hopelessly and bitterly controversial.

In a nutshell, John von Neumann, who worked with Eckert and Mauchly, has often been given full credit for the stored program concept because he advanced the idea in a widely circulated report that he released under his name alone. Later both Eckert and Mauchly disputed the importance of von Neumann’s contribution. Their position is argued eloquently in a recent book about the ENIAC. Although Davis admits that the question of von Neumann’s personal contribution “will probably never fully be resolved,” he seems to come down squarely on von Neumann’s side. His analysis is interesting, but in the big picture this acrimonious squabble lacks significance. For one thing, Zuse has a real claim to priority: he unmistakably proposed the stored program concept as early as 1936 (but did not pursue it, since it would have been of little use on his slow, mechanical memory machines). More importantly, the issue is something of a red herring. Davis himself first advanced this point of view in a 1987 article that may be regarded as a skeleton of the book under review. “What was really revolutionary about these machines,” Davis points out, “was their universal all-purpose character, while the stored program aspect was only a means to an end.”

That Turing had nailed the future of computing before all the others may be seen from several of his statements, of which the following from 1945 is typical: “There will positively be no internal alterations to be made even if we wish suddenly to switch from calculating the energy levels of the neon atom to the enumeration of groups of order 720.” In 1948 he put it this way: “We do not need to have an infinity of different machines doing different jobs. A single one will suffice.” Turing did not refer to this single machine by the misnomer that others with narrower visions were already using: he called it the universal machine, and, as Davis compellingly demonstrates, it was Turing’s conception of the universal machine that influenced von Neumann.

When a distinguished expert offers a popular exposition of his subject, we greet the effort with keen anticipation. That is all the more true when the writer is as skilled as Martin Davis. It is a pleasure to report that in this case our anticipation is richly rewarded. Not only does Davis captivate us with a fascinating story, he caps it with a moral as well. I have echoed this moral at the beginning of this review, but it is worth repeating in the author’s own words: “This book underscores the power of ideas and the futility of predicting where they will lead.” Seldom has this point been made so well. Read this book and enjoy.

[1] Brian E. Blank is professor of mathematics at Washington University, St. Louis, Missouri. This review appeared in Notices of the AMS, 48, 5, pp. 498–501. The online version contains a number of references not repeated here.

The Essential Turing

Title:                      The Essential Turing

Author:                   B. Jack Copeland

Copeland, B. J. (2004), ed. The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life, Plus The Secrets of Enigma. New York: Oxford University Press.

LOC:       2004275594

QA7 .T772 2004

Date Posted:      April 28, 2013

The following review is extracted from one by Amnon H. Eden.[1]

Time Magazine’s “The Century’s Greatest Minds” rated Turing up there with Einstein. In The Essential Turing, Professor B. Jack Copeland, the Director of the Turing Archive for the History of Computing and a renowned Turing scholar, delivers a sophisticated, compelling, and philosophically competent account of the role that Alan Mathison Turing has played in the information revolution.

There is little debate on Turing’s contribution to the foundations of computer science. In 1936, the twenty-four-year-old Turing wrote “On Computable Numbers”, an article which at once laid the foundations of computing and shaped the central and most influential paradigm of the 20th century. Turing’s theorem[2] outlines the precise limits of mechanical and electronic computers. Although since 1936 generations of programming languages, transistors, printed circuits, and microprocessors have come and gone, and the computing power of digital machines has improved by at least four orders of magnitude, Turing’s abstract computing machine[3] has proven to be computationally equivalent to almost any conceivable digital computing machine, including the generations of electronic, biological, and quantum computers to come, the precise limits of which were carefully laid out in Turing’s paper.
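The abstract machine in question (defined precisely in footnote [3] below: a tape of squares, one scanned symbol, and rules that may alter that symbol and move the tape) can be sketched in a few lines of modern code. The simulator and the increment rule table here are illustrative examples, not drawn from Turing’s paper:

```python
# A minimal simulator of a Turing machine in the footnote's sense:
# a tape of squares, a single scanned symbol, and a rule table
# (state, scanned) -> (symbol to write, head move, next state).
from collections import defaultdict

def run(rules, tape, state="start", halt="halt", head=0, steps=10_000):
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" = blank square
    for _ in range(steps):
        if state == halt:
            break
        write, move, state = rules[(state, cells[head])]
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells[i] for i in range(lo, hi + 1)).strip("_")

# Example machine: increment a binary number. Walk to its right end,
# then add 1, propagating the carry leftward.
rules = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry -> 0, carry on
    ("carry", "0"): ("1", 0, "halt"),    # 0 + carry -> 1, done
    ("carry", "_"): ("1", 0, "halt"),    # ran off the left: new digit
}
print(run(rules, "1011"))  # -> 1100
```

The universality Turing proved amounts to the observation that the rule table itself can be written on the tape of a single fixed machine, which then simulates any other machine from its description, much as `run` here interprets `rules`.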

Indeed, since Turing wrote “On Computable Numbers”[4], software has become a central player in modern life: it governs the majority of communications and mass media, controls the sale and purchase of stocks in stock exchanges, counts votes in national elections, guides “smart bombs” and operates machine guns, decides which vaccination our children receive, shortlists job applications, treats depression, operates artificial limbs, guides the navigation of automated and semi-automated vehicles, controls to some degree almost every single home appliance, and constitutes the subject matter of a growing proportion of scientific experiments. The prosperity of the 21st century industrialized (and industrializing) world has come to largely depend on cheap and efficient computing power. The most important constant emerging from these changes has been Turing’s contributions. Notwithstanding the contributions made by Gödel, Church, Post, and Kleene, much of the theory of computing can be taken to be little more than footnotes to Turing’s work.

The Essential Turing beautifully unravels Turing’s role in this revolution well beyond theoretical computer science. Copeland’s treatment of each one of Turing’s papers shows that Turing also established the central paradigms in artificial intelligence, artificial life, and the philosophy of mind. Turing’s notion of abstract automata has also had profound implications for every branch of science, including psychology, physics, and genetics. Processes in any branch of science are first and foremost analyzed in terms of a Turing automaton. In particular, Turing’s automata were used to model the operation of quantum gates, DNA sequences, and even the process of intelligent thinking. Indeed, Turing automata have come to be part of the lingua franca of the scientific investigation of every physical, chemical, biological, and psychological process that came under scrutiny.

But Turing’s contributions have gone even further. Turing’s profound insights into the philosophy of mind, metaphysics, and science offer compelling answers to questions that have remained open: What are the possible consequences of running a computer program? Is there a limit to the extent to which the behavior of computing machines can be predicted? Can machines be said to think? Can computers behave intelligently? For example, in 1950, having devised that which came to be known as Turing’s Test (Ch. 11, “Computing Machinery and Intelligence”), Turing predicted that the day on which digital computing machines would pass for humans was near. Although Turing’s time frame has proved somewhat inaccurate[5], it has become evident that Turing was right to a large and ever-increasing degree. Turing even foresaw the “software crisis” twenty years before it was declared, suggesting already in 1950 that surprises are inherent to computer programming. These and additional snapshots from Turing’s crystal ball are beautifully unravelled in The Essential Turing.

Copeland’s edition is a first-class guide to Turing’s canon. The anthology includes a complete, annotated version of every important manuscript that Turing wrote. Turing’s manuscripts and Copeland’s commentary thereon are organized into four sections, each of which is dedicated to Turing’s contributions to a separate discipline: the foundations of computing, the breaking of the Enigma cipher during WWII, artificial intelligence, and artificial life. The anthology also includes a transcript of a BBC program from 1952 during which Turing spoke on the problems of thinking machines (Ch. 14, “Can Automatic Calculating Machines Be Said to Think?”).

Although Turing wrote (and spoke) mostly plain and always very coherent English, the technical depth of his discussion may put the non-expert at a disadvantage. Difficulties in reading Turing may explain some of the common misinterpretations of Turing’s work, which are subjected to Copeland’s ruthless examination. In particular, Copeland wages a war on Church–Turing fallacies. According to one, in “On Computable Numbers” Turing set limits to human intelligence. A similar fallacy in physics takes Turing’s work to suggest limits to the capabilities of any physical process of computation (the Maximality Thesis[6]). Copeland closely examines each misconception and carefully refutes it. Copeland’s effortless and skilful writing clarifies the precise nature of the difficult problems Turing tackled without trivializing them. He engages in translating Turing’s conjectures into the language of contemporary science, thereby simplifying the technical parts of Turing’s work and allowing the reader to appreciate Turing’s work in full.

Copeland’s insightful, illuminating, and very intelligent commentary also brackets Turing’s work in its historical context. As much of Turing’s work is over 50 years old, the accompanying commentary helps the non-expert reader to bridge the time gap created by the evolution of the English language. For example, the naïve reader may mistake the “computers” mentioned in “On Computable Numbers” to stand for digital computing machines. This is an easy mistake to make, given that Turing took an active part (and even a leading role, for example in the case of Enigma, discussed in the second part) in developing some of the very first digital computers. Contributing to this confusion are theorems which prove that Turing’s notion of abstract automaton precisely defines the limits of computations performed by any digital computing machine. But as Copeland points out, the first digital computing machines came into existence only over a decade after Turing wrote “On Computable Numbers”. Rather, the term “computer” in this paper is taken to mean a person performing a task of computing which does not require imagination or creativity. As it turned out, Turing’s analysis set the limits to the technology which evolved during the seven decades which followed in ways that nobody envisaged. Evidently, Turing’s prophetic power can come to full view only when such terminological nuances are established. Copeland’s attention to detail is geared to root out any misconceptions arising from misreading Turing’s words.

No bibliography on the foundations of computing is complete without The Essential Turing. This attractive package offers an essential text for any scholar of the history, philosophy, or future of computing, and an excellent textbook for any academic program concerned with the philosophy of mind, artificial intelligence, or artificial life. Copeland’s effortless writing turns reading the works of the father of the digital age into a pleasure, making The Essential Turing an accessible work of popular science.

[1] Amnon H. Eden, Department of Computer Science, University of Essex, United Kingdom and Center For Inquiry, State University of Buffalo, Amherst, NY, USA

[2] In computability theory, the Church–Turing thesis (also known as the Turing–Church thesis, the Church–Turing conjecture, Church’s thesis, Church’s conjecture, and Turing’s thesis) is a combined hypothesis (“thesis”) about the nature of functions whose values are effectively calculable; or, in more modern terms, functions whose values are algorithmically computable. In simple terms, the Church–Turing thesis states that a function is algorithmically computable if and only if it is computable by a Turing machine. The Church–Turing thesis is a statement that characterizes the nature of computation and cannot be formally proven. Even though the rival formal characterizations of computability (general recursive functions, λ-definability, and Turing machines) proved to be equivalent, the fundamental premise behind the thesis—the notion of what it means for a function to be effectively calculable—is “a somewhat vague intuitive one”. Thus, the “thesis” remains a hypothesis.

[3] Turing gave a succinct definition of the machine in his 1948 essay, “Intelligent Machinery”. Referring to his 1936 publication, Turing wrote that the Turing machine, there called a Logical Computing Machine, consisted of: “…an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings.” Alan Turing, 1948, “Intelligent Machinery.” (Reprinted in Evans, C. R. and A. D. J. Robertson, eds. (1968). Cybernetics: Key Papers. Baltimore, MD: University Park Press), p. 61.

[5] See Moor, J. (2000), “Turing’s Prophecy Disconfirmed”, American Philosophical Association Newsletters, 99, 2 (Newsletter on Philosophy and Computers), Spring 2000.

[6] The maximality thesis asserts that anything computable by any means whatsoever is computable by Turing machine: the universal Turing machine is maximal among computing machines. It is a stronger claim than the Church–Turing thesis, with which it is often conflated.

Tom Clancy’s Net Force

Title:                      Tom Clancy’s Net Force

Author:                  Steve Perry

Perry, Steve (1999). Tom Clancy’s Net Force. Created by Tom Clancy and Steve Pieczenik. New York: Berkley

LCCN:       99610595

PS3553.L245 T66 1999

Date Updated:  June 22, 2015

Tom Clancy’s Net Force is a novel series created by Tom Clancy and Steve Pieczenik and originally written by Steve Perry. The original series ceased publication in 2006. Relaunched in 2013 and currently written by veteran Tom Clancy author Jerome Preisler, it is set in 2018 and charts the actions of Net Force: a special executive branch of the United States government set up to combat increasing crime and terrorist activity on the internet.

The initial Net Force concept was alluded to in the third Op-Center novel, Games of State; given that Net Force was created by the same two men who created the Op-Center series, it can be assumed that they occur in the same universe. However, no direct connection has yet been drawn between the two.

In December 2013, Net Force was relaunched as a series co-created by Tom Clancy and Steve Pieczenik and written by Jerome Preisler. Preisler is the author of the Tom Clancy’s Power Plays series, which ran from the late 90s until 2004.

The current series, available only as an e-book, is an updated re-boot of the original with a mixture of new and old characters.

The books in the Tom Clancy’s Net Force series so far are:

1. Net Force (1999) – Russian hacker Vladimir Plekhanov is wreaking havoc with computer attacks designed to win him lucrative security contracts; with the money, he plans to buy governments outright. Net Force eventually tracks him down and captures him in a daring mission to Chechnya. After Director Steve Day is assassinated, Alex Michaels is promoted to Commander of Net Force.

2. Hidden Agendas (1999) – Thomas Hughes is an aide to an important government minister. Using his position, he gains access to secret passcodes and other classified information, which his racist assistant, Platt, posts on the web. All the while, Hughes is diverting attention from his real plan: to steal $150 million and buy the government of Guinea-Bissau. Again, Net Force uncovers the scheme and stops him.

3. Night Moves (1999) – Peter Bascomb-Coombs, a brilliant scientist, has created a quantum computer capable of breaking into supposedly secure systems. He puts Net Force’s best programmer, Gridley, out of action by inducing a stroke over the ‘Net. The action takes place in England, and Net Force eventually apprehends or kills those involved.

4. Breaking Point (2000) – Morrison, another gifted scientist, uses Extremely Low Frequency (ELF) waves to drive large groups of people mad so that they attack one another. The Chinese are prepared to pay $400 million for his research, and Morrison is prepared to deal. He hires Ventura, an assassin turned bodyguard, to protect him.

5. Point of Impact (2001) – Robert “Bobby” Drayne is a chemist far ahead of the competition. He deals in “Thor’s Hammer” – a drug that grants superhuman strength and intelligence – selling it over the ‘Net. Net Force is asked to help locate the dealer, who, in a surprising twist, is eventually killed by someone working for a pharmaceutical company.

6. CyberNation (2001) – CyberNation is an online world where people live and pay taxes. A controversial idea, it needs far more support before Congress will recognize it as a “real” state. CyberNation’s backers use a team of programmers to launch web attacks that convince people their ISPs are unreliable, thus persuading them to join CyberNation. Net Force stops them before their main attack, but CyberNation does not go down.

7. State of War (2003) – Following directly on from CyberNation, the legitimate side of CyberNation continues to flourish after Net Force ends the web attacks, and it has even launched legal action against Net Force, alleging excessive force during the storming of the CyberNation cruise ship. This, however, is only a stalling tactic, and CyberNation’s famous lawyer instead finds himself on the wrong side of the law when his hired hitman spins out of control.

8. Changing of the Guard (2003) – The Net Force leadership is in transition. An encrypted message is intercepted and partially decoded by Net Force, revealing a list of Russian spies. Samuel Cox, a powerful American businessman, fears that his name is on the list and will stop at nothing to prevent its discovery.

9. Springboard (2005) – A top-secret Pentagon wargame is hacked, and only Net Force has the expertise to track down the culprit, but the team is tied up elsewhere. A budget shift then moves Net Force onto the Department of Defense budget; as a military operation, it can now give the Pentagon’s problem top priority. The team soon connects the attack to a Chinese general in Macau.

10. The Archimedes Effect (2006) – An army base is attacked, and Net Force is called in to track down the culprits, who are using a massive online VR game to have players test ways of breaking into bases. Captain Lewis, a computer expert who works with Jay on the case and tries to seduce him throughout the book, turns out to be the criminal. She is finally caught in the end.


Tom Clancy’s Power Plays

Title:                      Tom Clancy’s Power Plays

Author:                  Steve Pieczenik

Clancy, Tom, and Steve Pieczenik (1999). Tom Clancy’s Power Plays. New York: Berkley Publishing Group

LCCN:    00514642

Date Updated:  June 22, 2015

Tom Clancy’s Power Plays is a novel series created by authors Tom Clancy and Martin Greenberg. Each entry in the series is written by Jerome Preisler.

The books in the Tom Clancy’s Power Plays series so far are:

1. Politika (1997) – In 1999, a deadly terrorist attack stuns the United States, and all evidence points to a member of the Russian Federation’s newly formed provisional government. American businessman Roger Gordian finds his multinational corporation and its employees in jeopardy. Determined to find those responsible for the attack, he calls upon his crisis control team to intervene. But Gordian doesn’t realize how far the terrorists will go – and how much he has to lose…

2. ruthless.com (1998) – In 2000, when American businessman Roger Gordian refuses to sell his sophisticated encryption program to foreign companies, he suddenly finds his company the object of a corporate takeover – and to say it’s hostile doesn’t even come close. Gordian is the only man who stands between the nation’s military software and a powerful circle of drug lords and political extremists who want to put Roger Gordian – and the rest of the free world – out of business for good…

3. Shadow Watch (1999) – It’s 2001, and American businessman Roger Gordian has extended his reach into space. His company has become the principal contractor in the design and manufacture of Orion, a multinational space station. But the Orion project has been targeted by international terrorist Harlan DeVane, whose criminal enterprises thrive on violence and political instability.

4. Bio-Strike (2000) – In 2001, criminal mastermind Harlan DeVane has developed – and spread – a deadly, genetically engineered “superbug” resistant to all known cures. DeVane plans to auction off the triggering elements to the highest bidder, but first he’ll use them to destroy the greatest threat to his operations: Roger Gordian, head of UpLink Technologies.

5. Cold War (2001) – An UpLink Mars rover team goes “missing” in a storm. Pete Nimec is sent to oversee the search and rescue when the Antarctic base comes under attack.

6. Cutting Edge (2002) – For UpLink International and Roger Gordian, the Pan-African fiber-optic ring is his most ambitious – and expensive – endeavor to date. His nemesis, Harlan DeVane, is penetrating the network to gain unlimited access to a most valuable product: information. To ensure his success, DeVane kidnaps Gordian’s daughter. Now Gordian must trust his UpLink team as never before, as they fight on land and sea to save his daughter and turn the tables against DeVane…

7. Zero Hour (2003) – A radical Pakistan-based terrorist group attempts to use a powerful laser, built with gems from Southeast Asia, to release a deadly acid vapor cloud over New York City. There is a reference to the author in one of the conversations.

8. Wild Card (2004) – Pete Nimec is sent on a “vacation” to a resort in Trinidad and Tobago. What he finds is a corrupt oil company attempting to sell oil to two of the United States’ deadliest enemies: North Korea and Cuba.