Applying Algorithms to a Changing Transportation Landscape

Commentary


Algorithms can produce tangible benefits for transportation systems such as decreased traffic, increased safety and more-efficient transit systems.

In a constantly digitizing world, algorithms govern an increasingly large percentage of the global economy. Algorithms will become ubiquitous in the transportation space and can produce tangible benefits for transportation systems such as decreased traffic, increased safety, and more-efficient transit systems. Governing Transportation in the Algorithmic Age, a report published by the International Transport Forum (ITF), defines algorithms, explains how they may be used in the transportation space, and discusses some of the challenges related to their implementation.

In its simplest form, an algorithm can be defined as “a set of guidelines to accomplish a determined task.” The algorithmic process can be thought of as spanning three distinct domains: (I) the input domain, (II) the logic domain, and (III) the output domain.

Information flows into the input domain either from humans (e.g., a rider entering a location and destination in a Metro app) or from a device that senses the environment (e.g., a driverless car detecting an approaching stop sign). Next, the logic domain processes that information, using its internal logic to arrive at a course of action or inaction. Finally, the output domain acts on the result: it interprets the instructions from the logic domain and carries out the appropriate course of action. In the example of the driverless car and the stop sign, the car would begin to slow down until it reached a full stop before the sign. Through technological advancements, artificial intelligence (AI) has become a key component of algorithms.
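The three-domain flow described above can be sketched in a few lines of code. This is a toy illustration, not how any real vehicle software is structured; all function names and values are invented for the example.

```python
# Hypothetical sketch of the three algorithmic domains, using the
# driverless-car stop-sign example. All names are illustrative.

def input_domain(sensor_reading: str) -> dict:
    """Input domain: capture information from the environment."""
    return {"sign_detected": sensor_reading == "stop_sign"}

def logic_domain(observation: dict) -> str:
    """Logic domain: apply internal rules to choose a course of action."""
    if observation["sign_detected"]:
        return "brake_to_full_stop"
    return "maintain_speed"

def output_domain(action: str) -> str:
    """Output domain: carry out the chosen course of action."""
    if action == "brake_to_full_stop":
        return "Vehicle slows and stops before the sign."
    return "Vehicle continues at current speed."

# The three domains chained together:
print(output_domain(logic_domain(input_domain("stop_sign"))))
```

Each domain hands its result to the next, which is the core pattern regardless of how sophisticated the internal logic becomes.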

Two general strategies comprise AI: symbolic reasoning and machine learning. Symbolic reasoning operates in a predetermined environment where there is an understanding of how the different elements of that environment may interact with each other. These rules are hardcoded into the environment’s architecture using Boolean if-then logic: a series of nested if-then statements in a programming language that guide the algorithm, telling it how to act when a condition is true and how to act when it is false. The term AI in lay discussion typically refers to the second strategy, machine learning (ML). Machine learning applies statistical decision-making methods and interprets the results to make decisions without human intervention or guidance. ML algorithms learn over time by processing large datasets, interpreting them through trial and error, and building on previously learned inferences to continue to evolve and develop.
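The contrast between the two strategies can be made concrete with a deliberately simple sketch. Here the symbolic version hardcodes a braking rule as if-then logic, while the “learned” version infers a comparable rule from labeled examples; the data, threshold, and trivial learning method are all invented for illustration and are far simpler than any real ML technique.

```python
# Illustrative contrast between the two AI strategies.
# All data and thresholds are made up for demonstration.

# Symbolic reasoning: the rule is hardcoded as Boolean if-then logic.
def symbolic_should_brake(distance_m: float) -> bool:
    if distance_m < 30:  # hand-written rule
        return True
    return False

# Machine learning (toy version): the rule is inferred from labeled
# examples. This trivial "learner" picks the midpoint between the
# average braking and non-braking distances in the training data.
training = [(5, True), (15, True), (25, True), (45, False), (60, False)]

brake = [d for d, label in training if label]
no_brake = [d for d, label in training if not label]
learned_threshold = (sum(brake) / len(brake) + sum(no_brake) / len(no_brake)) / 2

def learned_should_brake(distance_m: float) -> bool:
    return distance_m < learned_threshold
```

The key difference is where the rule comes from: a human writes it directly in the symbolic case, while in the ML case the rule emerges from the data, so feeding in new examples would shift the learned threshold without anyone editing the code.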

The benefits of using algorithms in transportation can be tremendous, but they are not without risks. The ITF outlines six broad categories related to algorithmic risks:

(I) Safety and Security

This category focuses on ensuring that algorithms do not cause physical damage, or, if they do, that the damage is minimized. Algorithms are designed to prevent physical damage, but they may still inadvertently cause it.

(II) Data Protection and Privacy

Since algorithms deploy data-processing technologies, ensuring the privacy of the individuals associated with that data is paramount. In certain jurisdictions, there are data protection procedures in place, such as the General Data Protection Regulation (GDPR) that exists within the European Union. Other jurisdictions have their own rules regarding data protection, and one of the challenges faced by those deploying algorithms is how to navigate the different privacy restrictions across multiple jurisdictions. 

(III) Responsibility and Liability

The question of responsibility and liability is central when discussing algorithms. Which parties shoulder the blame when an algorithm fails? In the case of an Autonomous Vehicle (AV) crash, who would be blamed: the driver, the vehicle manufacturer, the lidar manufacturer, or the coder(s)? Establishing liability in the event of an accident or failure is difficult, and the use of algorithms will be limited until these legal questions are addressed.

(IV) Transparency and Explainability

As algorithms are often proprietary and written in languages that are challenging for the general public to understand, interpreting an algorithm’s actions can be difficult for regulators and the public alike. Even in cases where a regulator or administrator can observe the algorithm’s code, they may not be able to interpret it fully due to the disconnect between human and machine logic. 

(V) Fairness and Bias

Algorithms are often thought of as being bias-free, or at least less biased than human decision makers, but humans can accidentally encode bias into the machine. Such biases are known as “harms of allocation,” meaning that “certain individuals or groups are denied access to resources, services or opportunities.” An example could be someone denied a ride from a ridesharing service because they reside in a high-crime ZIP code. Addressing these biases is crucial to ensuring fairness and maximum effectiveness.

(VI) Welfare and Wellbeing

The welfare and wellbeing of the individuals, communities, and societies that interact with an algorithm are of utmost importance. An algorithm’s actions should be assessed based on how they affect the welfare and wellbeing of a community. If an algorithm is working correctly in a technical sense but producing unfavorable results at a societal level (perhaps through learned bias), then regulators and policymakers need to address and rectify it.

In addition to outlining the risks of algorithm adoption, the report also examines how human-based and algorithm-based decision-making systems differ in the benefits and challenges they create. One of the key differences is the question of liability, blame, and procedure. In the case of an accident on the roadway, legal frameworks and insurance networks dictate how to approach the accident and take the appropriate steps to rectify the situation. In a future where transportation is driven by algorithms, the procedural steps in such an event are not as clear as they are in the human system. For example, if an algorithm administered by a ridesharing service developed a negative bias through machine learning, it might steer service exclusively toward certain cities or neighborhoods. By limiting or eliminating service on the algorithm’s recommendation, the ridesharing service may perpetuate preexisting mobility problems in marginalized communities. There is no legal precedent yet establishing which party or parties are at fault for an algorithm’s bias, or how and by whom the situation would be remedied. To address these types of challenges, procedures will need to be ironed out before algorithms are implemented in transportation systems.

The report not only discusses the benefits and challenges of implementing algorithms in the transportation landscape but also makes 10 recommendations on how algorithms can be effectively integrated into the transportation landscape:

  1. Make transport policy algorithm-ready and transport policy makers algorithmically literate
  2. Ensure that oversight and control of algorithms is proportional to impacts and risks
  3. Build in algorithmic auditability by default into potentially impactful algorithms
  4. Convert analog regulations into machine-readable code for use by algorithmic systems
  5. Use algorithmic systems to regulate more dynamically and efficiently
  6. Compare the performance of algorithmic systems with that of human decision-making
  7. Ensure algorithmic assessment goes beyond transparency and explainability
  8. Establish robust regulatory frameworks that ensure accountability for decisions taken by algorithms
  9. Establish clear guidelines and regulatory action to assess the impact of algorithmic decision-making
  10. Adapt how regulation is made to reflect the speed and uncertainty around algorithmic system deployment 

How Algorithms Could Help Improve Transportation

The report includes three notable recommendations for improving transportation policy:

(I) Digitize Public Space

The “Digitization of Public Space” refers to transforming the rules of public spaces (e.g., signs, signals, and painted zones) into a digital format. In most jurisdictions, the rules and regulations governing a public space exist in written form across hundreds if not thousands of documents and have yet to be codified algorithmically. With Level 4 Autonomous Vehicles (AVs) expected on the roads in less than five years, digitized public-space rules and traffic regulations “can already help better manage those spaces, lower the costs of regulatory compliance (and control), and ensure a more proactive and dynamic allocation of those spaces for existing mobility service and freight operators.”

(II) Use Outcome-Based Regulation

“Outcome-based regulation” refers to identifying the desired outcome of a policy or law and working backward to achieve it. Contrast this with the current policy-making approach in the transportation space, which focuses on prescriptive solutions that lack creativity and hinder progress. Outcome-based regulations allow algorithms to learn and iterate with an ultimate policy goal in mind. As an algorithm continues to learn from new data and new experiences, regulators can observe its progress and update it as positive or negative consequences become apparent. Outcome-based regulation can only be effective if the appropriate regulatory body has clearly defined metrics that can demonstrate the policy’s efficacy or lack thereof. Ridesharing is one area where outcome-based regulation could be applied.

(III) Use AI to Control Transport Vehicles and Systems

Through the deployment of algorithms, AI can be used to physically control individual transport vehicles as well as entire systems. The report cites automated metro lines, self-driving vehicles, autopilot functions in aircraft, autonomous drone operation, and traffic control in smart traffic centers as examples. For example, driverless cars could constitute the fleet of a ridesharing service. AI, driven by that company’s proprietary algorithms, could allow these vehicles to navigate the complexities of modern roadways. The algorithm would not only be capable of operating the vehicles safely on the roadway, but could also compute the quickest route between destinations and assign the next pickup point before the current ride ends. Ridesharing services can already accomplish many of these tasks, but AI driven by more advanced algorithms would integrate those existing features and drive the vehicle safely. Integrating AI to control transport vehicles and systems opens the door to transportation outcomes previously thought impossible, unaffordable, or unfathomable. If administered correctly and safely, AI can increase the efficiency, safety, and reliability of transport systems.
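One well-known building block for the “quickest route between destinations” task mentioned above is Dijkstra’s shortest-path search. The sketch below runs it over a toy road network; the intersections, travel times, and function name are all invented for illustration and bear no relation to any real ridesharing system.

```python
# Minimal sketch of quickest-route computation using Dijkstra's
# algorithm over a toy road network. All data is invented.
import heapq

def quickest_route(graph, start, goal):
    """Return (total_minutes, path) for the fastest route, or (inf, [])."""
    queue = [(0, start, [start])]  # (elapsed minutes, node, path so far)
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Toy network: travel times in minutes between intersections.
roads = {
    "A": {"B": 5, "C": 10},
    "B": {"C": 3, "D": 9},
    "C": {"D": 4},
}
print(quickest_route(roads, "A", "D"))  # (12, ['A', 'B', 'C', 'D'])
```

A production routing system would layer live traffic data, turn restrictions, and fleet-wide assignment on top, but the core idea of searching a weighted road graph for the minimum-cost path is the same.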

Although the report clearly outlines the potential that algorithms have in the transportation world, the implementation challenges cannot be ignored. If safety and security remain paramount as algorithms continue to be implemented, then society can continue to reap their benefits.

Peter Smet is a transportation policy intern at Reason Foundation.