Showing posts with label UAVs. Show all posts

Thursday, June 11, 2009

Behind the Curve

I've had fun thinking Very Advanced Thoughts about military applications of robotic technology, particularly with biomimetic design of individuals and flocking-and-schooling algorithms for groups.

What I've really been thinking are Laughably Behind the Curve Thoughts, though, judging from a set of videos by German robotics developer Festo: penguins of the water and air, rays, and jellyfish.

Incidentally, biomimetic design isn't the exclusive province of German scientists; check out this bit of footage of robot fish schooling at the University of Washington.

Thursday, June 4, 2009

Pickets' Charge

IN WHICH UAVs TRICKLE DOWN TO NON-STATE ACTORS, THEN BACK UP TO ROGUE STATES.


Advances in UAV technology among non-state actors—from California farmers to Hezbollah to Silicon Valley writers—imply that these tiny unmanned planes will also see greater use by large companies that need cheap ways to peek over the horizon around their operations.

Internationally, commercial shippers and offshore oil rigs are logical users of commercial UAVs. Both types of businesses are capital-intensive and vulnerable to human threats, and the cost of UAV-based surveillance and defense is small relative to the replacement value of company facilities and employees.

It’s not hard to imagine that blue-water freighters and offshore drilling rigs may soon deploy their own pickets—screens of UAVs that extend their hosts’ alertness and increase warning times in trouble-prone waters. Since UAVs are on their way to becoming fully autonomous (they’ve long performed complex tasks such as crossing the Atlantic on their own), it’s reasonable to expect they might also be programmed to work closely with one another.

However, these concepts can also translate to offense, as this next scenario posits.

Assumptions: By 2015, rogue states mimic Western advances in UAVs and robotics.

Scenario: The South Korean Navy P-3 banked slowly over the crippled freighter. At this distance the P-3’s pilot could easily read the name painted in big white letters near the ship’s bow: MV Maersk Global.

The massive container ship appeared much as its crew had warned it would, and the P-3’s radio operator started relaying his observations back to the plane’s base at Pohang Airport.

Pohang, Recon One Four here. Target ship is in sight below us in calm seas. Hull appears undamaged. No crew on deck and pilothouse is empty. Burn marks, possibly from small explosions, at several points along decks. Pilothouse window appears blown in and there are burn marks on superstructure nearby, over.

The reply came instantly, since many high-ranking officials were monitoring the P-3’s mission.

Recon One Four, Pohang. Acknowledged. Continue.

The pilot swung around for a pass over the ship’s rear as the radioman continued his assessment.

Pohang, Recon One Four. Blast marks around rudder and possible damage. Screws are not turning and ship is adrift, over.

Recon One Four’s pilot mused that the decks were clear and the ship adrift for good reason: Thirty hours ago the crew had barricaded themselves belowdecks to ride out an attack by a swarm of unmanned aerial vehicles.

Yesterday morning, the Global’s crew was startled to see about 10 prop-driven UAVs flying around their ship. Since their ship was in international waters east of North Korea, the crew assumed the small planes had somehow strayed from an exercise by Pyongyang.

Still, the Global’s navigator triple-checked their location, confirming that the ship was well away from North Korean territory. In fact, it was on almost exactly the same line from Vladivostok to Pusan that it had traveled a dozen times before, always without incident.

So for the next 10 minutes, the crew treated the buzzing UAVs as an interesting nuisance—until the small aircraft turned out to be armed. Crewmen reported later that the UAVs fired a few small but potent missiles at the deck, causing the sailors to scatter. The UAVs then specifically targeted the ship’s rudder, damaging it so that it was stuck in a shallow left turn.

Once the crew realized they had no defenses and couldn't steer, they’d idled the ship’s engines, locked themselves below, and called for help. They would have happily surrendered to human attackers, but none appeared.

Nearly blind, the crew had no idea whether the UAVs were still present and no taste for finding out, considering that some of them had suffered minor burns and one apparently had slight hearing damage.

In the day or so since then, the Global had drifted westward and now was just about in North Korean waters.

The Maersk company quickly persuaded Danish diplomats to contact the North Korean government, which denied knowledge of any incident and any UAVs. Although multiple countries were ready to blame Pyongyang anyway, none of them had monitored any transmissions to or from the UAVs. The aircraft certainly communicated with each other—they had coordinated an attack—but seemingly not with anyone else.

Yes, it was difficult to blame Pyongyang, which also professed outrage that someone had attacked a peaceful commercial vessel so near its territory. To the world’s great surprise, North Korea told Denmark that it would allow anyone into the North’s waters to aid the Global—South Koreans, Americans, whoever the Danes thought might help.

Unfortunately, the North said, the rescue might be hazardous (UAVs had attacked a freighter, after all, and who knew where they might strike next? The North had to protect its own coast in case they reappeared!), so Pyongyang’s own navy would stand off and observe while other nations aided the Global.

And that was how a South Korean P-3 came to be loitering, unmolested, over a giant, abandoned-looking Danish container ship in North Korean seas.

Recon One Four, Pohang. Global crew ask that you verify no UAVs in the area.

Pohang, Recon One Four, that’s affirmative, no UAVs or other aircraft in target’s area. We are alone, over.


The pilot looked down again moments later to see two Global crewmen peering cautiously from behind a heavy-looking hatch that opened out from the superstructure. They waved, and the pilot waggled the P-3’s wings to acknowledge them.

It was remarkable, the P-3’s pilot thought: Zombie aircraft had created a zombie freighter.


Policy Issues: In one stroke, Pyongyang broadcasts that its technological prowess has jumped and that its coast is off limits, all at minimal cost in manpower or political capital.

Tactical Issues: How do you handle attribution in the above case? What countermeasures can a civilian ship’s crew take to defeat the UAVs? Must commercial ships begin to carry their own UAVs for safety? Are dueling automated UAVs a possibility in the next 10 years?

Technical Issues: In the animal kingdom, swarming, flocking and schooling are governed by simple algorithms that regulate an animal’s speed, course, and distance from its peers and other objects. What algorithms would enable aggressive yet useful swarming by UAVs? Can targets employ countermeasures that disrupt those swarms, fooling them into “believing” they are too close together, too close to a target, or acting too aggressively?

Fictional References: The late Michael Crichton’s Prey, which deals with ludicrously advanced swarming, learning nanobots.

Friday, May 1, 2009

The Opaque Society, Part 2

Yesterday’s CounterStory looked at the implications for civilian policymakers of the mass use of automated surveillance cameras to sift through images of people’s movements in search of suspicious patterns. Today I’ll examine a few of the military implications of the same technology.--PK

Military Scenario: Even as civilian courts debate privacy issues in the years 2012-2013, automated mass identification is old news for the U.S. military, whose application of this technology is significantly ahead of civilian programs. In fact, U.S. forces can now target specific soldiers and officers in opposing armies by combining UAV footage, data-mining of enemy nations' records, and social-networking software that reverse-engineers soldiers’ movements and communications to create a remarkably accurate picture of an army’s order of battle.
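The social-networking step imagined here can be illustrated with the simplest possible graph measure: whoever communicates with the most distinct contacts is likely a node of command. The sketch below is a toy; the message format and the degree-centrality shortcut are my own illustrative assumptions, and a real system would fuse far more data than who-talks-to-whom.

```python
from collections import defaultdict

def infer_command_ranks(messages):
    """Rank callsigns by how many distinct contacts they communicate with
    (degree centrality). `messages` is a list of (sender, receiver) pairs."""
    contacts = defaultdict(set)
    for sender, receiver in messages:
        contacts[sender].add(receiver)
        contacts[receiver].add(sender)
    # Most-connected nodes first: a crude proxy for command position.
    return sorted(contacts, key=lambda node: len(contacts[node]), reverse=True)
```

Even this crude measure hints at why the technique is powerful: command structures leave a communications footprint whether or not the message contents are ever decrypted.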

While this is an unprecedented jump in precision targeting, it has one important side effect: Every military shot fired can now be seen as an assassination. Unless a target is an imminent threat—he is firing or readying a weapon to fire, or directing others to do the same—knowing the target's identity and rank makes it difficult to justify killing him and not his commander.

By 2020, this phenomenon has caused rules of engagement to tighten until few people besides heads of state and senior military commanders are legitimate targets of war. Anyone else who is not actually holding a weapon or directing fire is off limits.

Perversely, just as the U.S. military circa 2009 avoided providing “body counts,” it now avoids specifying the identity or reasons for killing any specific opponent. This makes it difficult to trumpet the “good” news of eliminating a particular enemy leader.

Question: Is the law of armed conflict (LOAC) equipped, and is the U.S. military ready, to handle the phenomenon of large-scale but targeted killings?

Reference: P.W. Singer’s Wired for War

Monday, April 27, 2009

The (Too-) Smart Bomb

Here’s the first of many near-future-conflict CounterStories. As with most good scenarios, this one is based on only a handful of technological, economic or social changes—but those seemingly small changes quickly add up to a different set of decisions that policy-makers might face. Hope you enjoy, and feel free to let me know what you think.--Paul Kretkowski

Assumptions: By the year 2016, global trade and shipping recover from the 2008–2010 recession, but high-seas piracy worsens. Manned offshore patrols remain as costly as ever, while technological advances enable cruise missiles to stay airborne for days rather than hours.

Scenario: The U.S. has deployed long-range, semi-autonomous cruise missiles to patrol international shipping lanes in regions where manned seagoing vessels are spread too thin. With human air-traffic controllers (ATCs) monitoring them, these missiles criss-cross areas around the Strait of Malacca and Gulf of Aden, for example, where piracy is rampant but enforcement vehicles and manpower are scarce.

The missiles loiter around shipping lanes and occasionally “interrogate” ships’ captains by radio to try to determine whether they are pirates. The missiles’ computers use a variety of criteria to rate whether or not a particular vessel is likely a pirate—the captain’s responses, ship’s tonnage, registry, destination and manifest. Human ATCs then decide whether to allow a legitimate-seeming vessel to proceed—or demand that an alleged pirate vessel surrender or be disabled or destroyed by the missile.
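The rating the scenario describes could be as simple as a weighted checklist over the listed criteria. Here is a minimal sketch in that spirit; the field names, weights, and threshold are hypothetical illustrations, not any real system's logic.

```python
def pirate_risk_score(vessel):
    """Toy weighted-evidence score over the criteria named in the scenario.
    `vessel` is a dict; all fields, weights, and cutoffs are illustrative."""
    score = 0.0
    if not vessel.get("answered_interrogation", True):
        score += 0.4   # silence or evasion during radio interrogation
    if vessel.get("tonnage", 0) < 500:
        score += 0.2   # small, fast craft fit the profile
    if vessel.get("registry") in {"unknown", "none"}:
        score += 0.2   # no verifiable flag state
    if not vessel.get("manifest_matches_destination", True):
        score += 0.2   # stated cargo and course don't add up
    return score

def recommend(vessel, threshold=0.5):
    """Return a recommendation; the human ATC makes the final call."""
    return "flag_for_atc" if pirate_risk_score(vessel) >= threshold else "allow"
```

Note that the last line of the sketch is where the scenario's whole policy problem lives: in "crisis" mode, "flag_for_atc" would have to resolve to an action with no human in the loop.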

Since these cruise missiles are “smart” enough to be semi-autonomous, a single human ATC can monitor and manage several missiles at once, overseeing their day-to-day, largely scripted interrogations of international shipping—the robotic equivalent of today’s police DUI checkpoints.

On the high seas as elsewhere, though, manpower and attention are at a premium.

Off the Kenyan coast, pirates manage to seize a commercial ship and begin running it toward the nearest East African port, diverting Navy eyeballs and resources from monitoring cruise missiles deployed near the Gulf of Aden.

The suddenly busy human ATC uses a new option, that of switching most of his cruise missiles over to a fully automatic “crisis” mode while he focuses his attention elsewhere.

The cruise missiles near the Gulf of Aden continue to aggressively question civilian vessels about their ships and intentions, just as before. Those ships take the missiles’ intrusiveness seriously because the missiles are nearly impossible for civilian vessels to counter, and flit in and out of radar range while making up their “minds” about the civilians’ status.

For the first time in history, humans are forced to engage in a sort of Turing-test-in-reverse administered by a machine, and must prove to a computer alone that they are not pirates.


This scenario raises several questions. As a “police” system, the cruise missiles and their ATCs normally err on the side of freeing the guilty rather than punishing the innocent. In “crisis” mode, though, the now-autonomous missiles make life-or-death decisions on their own, and opportunities for mistakes multiply.

What is the structure of checks and balances that might allow the ATC and the cruise missiles to make the right call, both morally and in accordance with maritime law? How would laws relating to piracy have to be modified to allow robotic cruise missiles to use disabling or deadly force without a positive order to do so? Is the trade-off of a more widespread maritime presence worth the potential cost in lives, should a cruise missile make the wrong decision?

For some interesting fiction that discusses machine volition and friend-or-foe problems, see Fred Saberhagen’s “Berserker” stories and Keith Laumer’s “Bolo” stories.

Thursday, April 23, 2009

Upcoming Scenarios

For the past few weeks I’ve been periodically creating scenarios based on my reading of “Wired for War,” P.W. Singer’s book on the escalating use of robotic devices in warfare.

If the increasingly lethal, increasingly autonomous machines that Singer describes take the field—self-directing unmanned aerial vehicles (UAVs), demining robots, tracked devices armed with rifles or rockets, small submersibles—they’ll change not just combat operations but the risks and opportunities policymakers face.

I chose to illustrate some of these challenges through scenarios, which may let readers quickly grasp how a new technology could produce a particular kind of future in a way that simply describing the technology might not.

For example, today’s UAVs already make many decisions on their own, constantly adjusting their speed, direction, trim and angle of attack to remain airborne, while humans handle executive-level decisions about targeting and weapons use. The scenario I’ll publish in a few days, “The Too-Smart Bomb,” deals with the tactical and strategic consequences of using UAVs that are just slightly smaller and more independent than today’s.

I’m currently working on five other scenarios that deal with a backlash against face-scanning technology, slow robots that replace fast explosions as terrorists’ weapons of choice, high-speed urban mapping by the military, “magic” bullets that audit themselves, and cruise missiles that can interrogate pirates. Stay tuned for these and “The Too-Smart Bomb” in a few days.