Amazon Drones: The gap between vision and regulation

In Sunday’s 60 Minutes interview, Amazon CEO Jeff Bezos unveiled the company’s plans to offer drone deliveries within 30 minutes, igniting a discussion filled with excitement and questions. At the forefront is the question of when we can expect to see Amazon’s drones on the horizon.

In the US, the Federal Aviation Administration (FAA) is currently devising rules for the integration of drones into the domestic airspace, with a view to enabling commercial use by 2015. Although the FAA rules will certainly open up the airspace to commercial drones, it has yet to be determined whether the framework will accommodate applications like Amazon’s.

In Canada, commercial drones are already in use; however, each operation must be approved by Transport Canada through the issuance of a Special Flight Operations Certificate (SFOC). To be approved, an applicant must complete a risk assessment and outline the steps that will be taken to mitigate those risks to an acceptable level. Normally, it takes at least 20 days to obtain an SFOC, and in many cases (especially for first-time applicants) the process takes longer. The Canadian regulations are also under review, as the UAS Program Design Working Group is set to make recommendations for amending the aviation regulations by 2017.

Regulators on both sides of the border have a difficult task ahead: balancing innovation and safety. Hopefully, the new regulations set to come out in Canada and the US will not stifle commercial applications like Amazon’s.


Video of my presentation at Stanford Law, “A Licensing Approach to Regulation of Open Robotics”

Video of my paper presentation at the We Robot 2013 conference at Stanford Law School, with commentary by cyberlaw expert Professor Michael Froomkin.

“A Licensing Approach to Regulation of Open Robotics”

On April 8th, I had the opportunity to present my paper, A Licensing Approach to Regulation of Open Robotics, at the We Robot conference at Stanford Law School. The paper was commented on by cyberlaw expert Professor Michael Froomkin. The following is a brief introduction to my proposal.

My paper starts from the observation that openness is normally associated with increased innovation. In the case of robotics, however, open architecture can have the inverse effect, creating a disincentive for manufacturers to develop open systems. The singular preoccupation of open source, as we currently understand it, with software freedom presents a barrier for manufacturers seeking to restrict or foresee harmful or unethical applications by downstream channels.

What I propose is the development of a new licensing model that promotes “sufficient and selective openness” as a means to regulate open robotics. The Ethical Robot License (a work in progress that I designed as part of the paper) seeks to reconcile the competing goals of attaining total software freedom and promoting ethical, non-harmful use of robots by imposing selective obligations and restrictions on downstream applications. Violation of the license terms renders downstream parties liable to upstream channels for breach of contract and intellectual property infringement.

A Review of Ryan Calo’s “Code, Nudge, or Notice?”

In the introductory lecture of our Regulation of Internet Communications class, we got to play Lessig’s “dot game,” brainstorming solutions to the “kids drive too fast in my neighborhood” problem. We came up with various options, including increased police presence, signs posting speeding fines, driver education, autonomous cars, speed bumps, vigilantism and the “fake deer”… We then categorized our solutions under Lessig’s four regulatory constraints: architecture, the market, the law, and norms.

In this recent article, Ryan Calo adds a new element to Lessig’s “dot game,” inviting us to go one step further by providing a framework for choosing among different regulatory interventions. Calo’s mode of inquiry departs from prior work (Lessig, Thaler & Sunstein, etc.) in that he considers the overlaps among the different constraints, arguing that perhaps we shouldn’t get too caught up in the differences between code, nudging, and notice, as the distinctions may not be as pronounced as they appear at first glance. Calo further argues that “whether regulators employ code, nudge or notice, there is almost always the deeper choice between helping citizens and hindering them.”

In selecting among alternative interventions, Calo argues that policymakers ought to favor “facilitation” over “friction”, in particular where procedural safeguards are missing in action.  Calo defines “facilitation” as “helping citizens develop and consummate their intentions”… “helping people arrive at their preferred outcome.”  In contrast, he defines “friction” as “creating barriers – physical or otherwise – to the conduct citizens would otherwise carry out.”

What I wonder is whether, absent a shared interest between regulators and citizens, regulators would be motivated to adopt a facilitation approach and risk citizens arriving at outcomes that are not in line with the regulators’ own interests. The self-interested nature of policymakers creates an incentive for them to apply at least some measure of manipulation to ensure that citizens arrive at outcomes consistent with their own interests.

Facilitation suggests a shift towards self-regulation, which may increase the risk of regulatory capture. Under the facilitation model, citizens must be given access to information in order to make decisions. That information will likely come from industry, which is by nature self-interested. Will government intervene to place limits on the information that industry may pass on to people? If it does, it is engaging in a subtle form of manipulation by selecting which messages people can receive (which moves us back towards friction).

Calo’s analysis adds a new and exciting dimension to our familiar dot game scenario…   Anyone up for playing “dot game 2.0”?