By Misha Guttentag and JP Schnapper-Casteras

One of Satoshi Nakamoto’s last known public posts, in 2010, was a warning: “Wikileaks has kicked the hornet’s nest, and the swarm is headed towards us.” At the time, PC World had just published an article describing how bitcoin (then trading for a whopping $0.20) could be used for permissionless donations to Wikileaks, despite any bans by local authorities or payment processors. It is impossible to know Nakamoto’s exact concern, but one thing was clear: how people chose to use the bitcoin protocol he released — including how they would spend the bitcoin “currency” it issued — was out of his hands, even if it meant others could potentially use it to break social norms, business monopolies, and local laws. And if bitcoin payments to Wikileaks (or poker sites) were forbidden, could Nakamoto or bitcoin’s other developers be held personally liable for enabling unlawful activity?

Open-source programming often means letting go.

Today, nearly a decade after Nakamoto’s warning, regulators continue to wrestle with waves of permissionless innovation emerging from open-source projects like bitcoin and, to a greater extent, from the Internet itself. A fundamental question remains: should programmers of open-source software — where anyone can build upon, re-release, and re-deploy the code — be held responsible if users take the code and violate their local laws or regulations, and if so, where should we draw those lines?

Developer Intent

One possible answer comes from a Commissioner of the Commodity Futures Trading Commission (CFTC), Brian Quintenz, who recently published his thoughts on when, in his estimation, the CFTC should (and should not) bring action against open-source developers whose code is used to violate CFTC regulations. As Quintenz put it, “Absent proof that developers intended that the code facilitate conduct that is illegal, the CFTC should not bring a case against them.” He proceeded to list several relevant questions to consider in determining a “developer’s intent,” including whether the developers were promoting, profiting from, or modifying code in order to enhance unlawful activity.

If the code is a contribution to a bigger system, what role will it play?

For developers who, like Nakamoto, are designing, building, and launching code, these questions of liability are not merely musings: they affect the likelihood of attracting investment, users, and developer talent. Absent more specific statutes or binding authority from regulators, sometimes public speeches and statements are the best sources of legal guidance available.

With Commissioner Quintenz’s commentary in mind, we wanted to highlight a question we expect to surface in the near future: to what extent should open-source developers of bots or algorithms expect to have some legal responsibility for actions that bots take on behalf of their users? Take, for example, a “trading bot” that enables a user to plug into a trading platform and buy or sell products or digital tokens based on user-provided parameters (e.g., if it rises to XX, sell it — if it falls to YY, buy it). Could a trading bot developer be held liable if a user in a Treasury-sanctioned country downloaded open-source trading software and, via a VPN, surreptitiously deployed it on a US-based exchange to sell bitcoin, violating U.S. sanctions? The short answer is likely no, and here’s why:

Guidelines for Code Design

In his speech, Commissioner Quintenz weighed a related question about software that is “specifically programmed” to violate laws, in this case, “to purposely distort the final settlement price” of a futures contract. His approach is worth highlighting here: “The more a code is narrowly tailored to achieve a particular end, the more it appears as if it was intentionally designed to achieve that end.” In other words, if the bot were knowingly designed and intended to manipulate a futures contract’s settlement price (a practice known as “banging the close”), it would “look a lot like aiding and abetting” a legal violation — but if the bot merely allows a user to make trades at whatever time they want, and it is only the user who deploys the software to manipulate a price during a particular interval, the developer should not expect to be held liable.

Additional questions emerge for developers of software that users could operate in ways that transgress Securities and Exchange Commission (SEC) rules, but for those developers, similar principles should apply. For example, if a developer releases software knowingly and purposefully to enable users in the United States to manipulate trading markets, then the developer might be held liable for contributing to the forbidden action. Recent charges against a software developer involved in designing custom software that enabled sophisticated spoofing of futures markets confirm as much. If, on the other hand, the developer merely releases open-source trading software — essentially exchange-agnostic, asset-agnostic, leaving the decision entirely up to the user as to which assets to trade, consistent with the laws of the user’s own jurisdiction — then the developer has a compelling case that they should not be held responsible for actions taken (or modifications made) by end-users of their own volition.
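To make that distinction concrete, here is a minimal sketch, in Python, of the kind of exchange-agnostic, asset-agnostic threshold logic described above. The names are hypothetical and nothing here corresponds to any real exchange’s API; the point is simply that every decision-relevant input (the asset, the venue, and the “if it rises to XX, sell it / if it falls to YY, buy it” thresholds) is supplied by the user rather than hard-coded by the developer.

```python
from dataclasses import dataclass


@dataclass
class UserParameters:
    """Everything that determines what gets traded, and where, comes from the user."""
    asset: str          # e.g. "BTC"; the user chooses the asset
    exchange_url: str   # the user chooses the venue to connect to
    sell_above: float   # "if it rises to XX, sell it"
    buy_below: float    # "if it falls to YY, buy it"


def decide(current_price: float, params: UserParameters) -> str:
    """Generic threshold rule: the logic is agnostic about which asset is traded,
    on which exchange, and at what time the user chooses to run it."""
    if current_price >= params.sell_above:
        return "sell"
    if current_price <= params.buy_below:
        return "buy"
    return "hold"


# Example run: the user, not the developer, picks every parameter.
params = UserParameters(
    asset="BTC",
    exchange_url="https://exchange.example",  # hypothetical placeholder venue
    sell_above=60_000.0,
    buy_below=50_000.0,
)
print(decide(55_000.0, params))  # -> "hold"
```

Under the intent-focused approach described above, code like this reads as a neutral tool: whether a particular deployment crosses a legal line turns on the parameters and venue the user chooses, not on anything the developer designed into the software.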

Users, Use Care

For users themselves, similar guidelines should apply. On the one hand, if users deploy software to knowingly engage in unlawful activity, they should expect to be held responsible. On the other hand, the risk associated with other types of software use is less clear. For example, imagine a group of card-game enthusiasts who use an algorithm to help them trade Magic: The Gathering cards online, an activity that (as far as they know) is legal within their jurisdictions. If their regulators later clarify that online trading of Magic cards is somehow outlawed, the group members should expect that any future trades of these cards will carry legal risk, but they are less likely to be held liable for previous trades, since at the time they did not knowingly engage in impermissible activity. As a best practice, users should take care to check the requirements of their local jurisdiction and adjust their activities accordingly, regardless of the actual functionality of the open-source software before them. As a matter of policy, the onus here should fall on the users, not on open-source developers, since developers would be hard-pressed to monitor, limit, and/or tailor access to their code to meet the evolving jurisdiction-specific needs of each user.

For the time being, Quintenz’s recent focus on a software developer’s intent is a sensible rule of thumb for open-source projects of all stripes, including in the context of bots and digital assets. Put differently, developers of OpenOffice should not be held liable if someone uses it to draft a ransom note; on the other hand, a developer clearly designing for illicit use, like releasing a tool called “OpenPassportForger,” would have a comparatively harder time arguing he or she did not intend it to be used to break the law. Even with these broad principles in mind, other market- and jurisdiction-specific rules can still apply in certain circumstances. For example, SEC regulations regarding algorithmic trading strategies conceivably apply to algorithms that trade securities (like stock in eBay and Amazon), while not applying to algorithms that help sellers arbitrage product prices between the eBay and Amazon platforms, since the products being traded are not securities subject to SEC jurisdiction.

And until the day comes that bots are sentient enough to consult their own lawyers, developers and users of sophisticated automated software should — when in doubt — consult attorneys* of their own.

[*] Disclaimer: Consistent with the terms and conditions of this site and blog, this post is not and should not be treated as legal advice.