Neither human nor robot, a digital police officer (D-PO) is a vision of the machine teammate: an artificial intelligence-based partner that officers can reach through multiple devices, including the patrol car’s on-board computer and officers’ mobile devices. A D-PO has access to multiple data sources, including live security camera feeds and criminal databases, and can coordinate with the D-POs assigned to other officers. Scientists and engineers, like those at Pacific Northwest National Laboratory (PNNL), are working in the field of human-machine teaming to bridge the gap between today’s tools and the machine teammates of the future.
Patrol Officer Miller and her reliable D-PO partner have worked together for five years. As they start their patrol, D-PO knows which neighborhoods the pair will patrol based on previous patrols and provides the officer with a situation report on these neighborhoods. Thirty minutes into the patrol, the dispatcher informs them of a reported robbery and provides a description of the suspect.
While Miller drives to the site of the robbery, D-PO monitors camera footage from an autonomous police drone circling the crime scene. Using its deep learning image recognition, D-PO detects an individual matching the suspect’s description. D-PO requests to take over driving so the officer can study the video footage of the possible suspect. The officer accepts, and D-PO shares the video on the patrol car’s display and explains the features that led to its high-confidence rating.
D-PO asks, “Do you want to attempt to apprehend this person?” Agreeing that the individual matches the suspect’s description, Miller decides to pursue. D-PO quickly calculates the best route to reach the suspect and presents it to Miller for review. With patrol lights on, the team begins following the suggested route. Although D-PO could drive, they both know that Miller prefers controlling the wheel in times like this.
D-PO notifies dispatch of the plan and updates other D-POs in the area. Through a quick exchange, the D-POs identify which patrol cars are best positioned to provide backup and coordinate with their patrol officers to determine who will respond. Officer Smith, approaching from the southwest, will arrive in 10 minutes. As the two cars converge on the scene, their D-POs track each one’s location and anticipated arrival time.
Talking as she would with a human partner, Miller asks, “What are my best options for apprehending this guy?” D-PO processes the question along with the context of the situation and quickly shares three options for apprehending the suspect, including a risk assessment for each one. Since the initial robbery report, headquarters has identified the suspect and compiled his criminal history and other related data, which are included in the risk assessment and displayed on the center console.
D-PO’s brief auditory description is not enough for the officer to decide, so she needs her digital partner to take the wheel while she studies the various options. “Take over,” she tells D-PO. From previous experience, D-PO knows what this simple command means. “I am taking over driving,” D-PO says to confirm that it understands and will act on the officer’s directive. They then proceed to the scene.
The Essence of Human-Machine Teaming
The above scenario may sound like something from a science fiction novel depicting a distant future. However, many of the technological capabilities described are real. Even so, current technology does not behave like a machine teammate, because the D-PO described above is more than a collection of tools. Many existing tools have one or two of D-PO’s capabilities, but that is not enough to function as a teammate. For example, autonomous systems like drones and self-driving cars are useful, but on their own they are not teammates: they require the user, or in this case an officer, to regularly monitor their activity to make sure they are functioning properly. Search engines like Google and voice assistants like Alexa are useful, but they do not anticipate an officer’s needs or take the initiative to help solve a problem the way a teammate would. Sensors and their associated alerts can help direct an officer to important information, but they do not help the officer determine how to act on the information they provide.
It is challenging for developers to integrate these complex capabilities in a way that can support humans as partners and teammates. As a result, much of today’s interaction between humans and their tools (or “machines”) places a burden on the human, who either directs the tool to perform tasks or closely monitors automated assistants to ensure accurate performance. Developers need a deeper and more nuanced understanding of the human-machine dynamic in order to build machines that can work toward larger goals and are capable of doing more than blindly executing tasks.
Machine teammates both enhance team performance and minimize the work required for the human to manage the machine. A good machine teammate has enough autonomy to perform the job while staying connected with its human partner. Rather than blindly performing tasks, machine teammates learn from their human partners and provide suggestions, support, and backup when those partners need help. They work toward a larger team goal and support their human partners along the way.
The many D-PO capabilities on display in the example above paint a picture of a “gold standard” in machine teammate development and design in law enforcement. These capabilities can be organized into three broad categories that define a true teammate: a machine teammate should be able to observe, communicate, and act.
Observe
Unlike many current computers, machine teammates have an awareness and understanding of their environment and their fellow officers. These teammates have access to sensors and databases that monitor the environment and help them adapt quickly when unexpected events arise. In the scenario above, for instance, D-PO accessed video footage from a police drone to help identify the suspect and used the patrol car’s onboard sensors to support navigation and driving when needed.
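The minimal sketch below illustrates one piece of that observation loop: screening a detection from a drone feed against a reported suspect description before surfacing it to the officer. The Detection class, attribute names, and confidence threshold are hypothetical stand-ins for the perception models a real system would use; this is an illustration of the screening logic, not a fielded capability.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One person detected in a drone video frame (hypothetical fields)."""
    attributes: dict    # e.g., {"jacket_color": "red", "build": "tall"}
    confidence: float   # detector confidence, 0.0-1.0
    location: tuple     # (latitude, longitude)

def matches_suspect(det: Detection, description: dict, threshold: float = 0.85) -> bool:
    """Surface a detection only if it matches the reported description
    with high confidence; everything else stays off the officer's display."""
    if det.confidence < threshold:
        return False
    return all(det.attributes.get(key) == value for key, value in description.items())

# Screen one frame's detections against the dispatcher's description (invented data).
description = {"jacket_color": "red", "build": "tall"}
frame = [
    Detection({"jacket_color": "red", "build": "tall"}, 0.91, (46.28, -119.28)),
    Detection({"jacket_color": "blue", "build": "medium"}, 0.88, (46.28, -119.27)),
]
candidates = [d for d in frame if matches_suspect(d, description)]
print(candidates)  # only the high-confidence match is flagged for the officer
```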
Perhaps just as important, machine teammates should be able to learn officers’ preferences and patterns to predict what officers might need next. For example, the D-PO above anticipated what situation report its partner would need because it learned the patrol route over time. Additionally, D-PO was able to match patterns and recognize images, enabling it to identify the possible suspect in the drone footage.
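The sketch below is a minimal illustration of that kind of anticipation, assuming the teammate simply keeps a log of past shifts: it predicts which neighborhoods the officer is likely to cover from frequency counts so situation reports can be prepared in advance. A real system would draw on much richer models of context and preference; the function and data here are invented.

```python
from collections import Counter

def predict_patrol_neighborhoods(patrol_history, top_n=3):
    """Guess which neighborhoods are most likely to be patrolled next,
    based only on how often they appeared in past shifts."""
    counts = Counter(name for shift in patrol_history for name in shift)
    return [name for name, _ in counts.most_common(top_n)]

# Hypothetical log of an officer's recent shifts.
history = [
    ["Riverside", "Downtown", "Eastgate"],
    ["Riverside", "Downtown"],
    ["Riverside", "Harborview"],
]
likely = predict_patrol_neighborhoods(history)
print(likely)  # ['Riverside', 'Downtown', 'Eastgate'] -> pre-build situation reports for these
```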
Communicate
Rather than simply observing the environment and its partner, the machine teammate also makes recommendations and understands instructions. Designing for the ability to communicate naturally and in multiple ways with humans is an important aspect of human-machine teaming research.
Machine teammates conduct analyses and detect significant events in the environment, and they must be able to communicate their findings effectively and efficiently. Proper communication often involves a tradeoff: providing enough information to help the officer appropriately trust the technology’s guidance, without providing so much that it overloads the officer. Balancing this tradeoff can be challenging. A machine teammate that is sensitive to its human partner’s current focus and workload is better positioned to navigate it, using its understanding of the current situation to know when and how to interrupt the officer with its findings.
For example, when D-PO was presenting its three options for apprehending the suspect, it spoke the options to the officer while she was driving. Presenting more detailed information may have caused the officer to lose focus on driving. Recognizing that Miller may need to review a more detailed analysis, D-PO presented this information on the patrol car’s center console display for further review. This approach gave Miller the opportunity to study the options when she had time to focus on the analysis.
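One simple way to express this workload-sensitive presentation in code is sketched below: each finding carries a short spoken summary and a fuller analysis, and a rough estimate of the officer’s workload decides how much is pushed at her right now. The workload levels, channels, and phrasing are invented for illustration and stand in for far more nuanced models of attention.

```python
from enum import Enum

class Workload(Enum):
    LOW = "low"     # e.g., parked or routine patrol
    HIGH = "high"   # e.g., driving with lights on

def present_finding(summary, details, workload):
    """Decide how much information to push at the officer right now."""
    if workload is Workload.HIGH:
        # Brief spoken summary only; the full analysis waits on the console display.
        return {"speak": summary, "console": details, "walk_through_now": False}
    # Low workload: it is safe to walk through the full assessment immediately.
    return {"speak": f"{summary} Here is the full assessment.", "console": details, "walk_through_now": True}

plan = present_finding(
    summary="Three apprehension options identified; option two carries the lowest risk.",
    details="Option 1: ... Option 2: ... Option 3: ... (full risk assessment)",
    workload=Workload.HIGH,
)
print(plan["speak"])  # short audio cue; detail stays on the center console
```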
A good machine teammate understands the context of the current situation when receiving instructions and tasking from its human partner. For example, D-PO knew who Miller was referring to when she said, “this guy.” The technology’s ability to factor in context when processing human questions and directives makes communication easier for the human. In this scenario, Miller does not need to spend extra time and energy being detailed and precise in her instructions to D-PO. She can be vague and abstract, and the machine can still correctly interpret her requests.
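The toy sketch below illustrates that kind of context tracking under a deliberately simple assumption: the most recently discussed person is the intended referent of a vague phrase. Real natural language understanding is far more involved; the class and phrases here are purely illustrative.

```python
from typing import Optional

class DialogueContext:
    """Tracks who has been discussed during an incident so vague phrases can be resolved."""

    def __init__(self) -> None:
        self._salient_person: Optional[str] = None

    def note_person(self, person_id: str) -> None:
        """Record the person currently under discussion (e.g., the suspect flagged in the footage)."""
        self._salient_person = person_id

    def resolve(self, phrase: str) -> Optional[str]:
        """Map vague referring phrases to the most recently salient person."""
        if phrase.lower() in {"this guy", "him", "the suspect", "that person"}:
            return self._salient_person
        return None

ctx = DialogueContext()
ctx.note_person("suspect_flagged_in_drone_footage")
print(ctx.resolve("this guy"))  # -> 'suspect_flagged_in_drone_footage'
```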
Act
Good machine teammates are proactive. They take initiative to accomplish tasks and direct their human teammates’ attention to new developments when necessary. Designing technology to support tasks without explicit guidance is another focus of human-machine teaming research.
Machine teammates do not always need explicit instructions to perform an action. Based on what they have observed and learned, they can complete tasks in anticipation of what is needed, without waiting for instructions. For example, the D-PO above coordinates with dispatch and with other officers’ D-POs to arrange backup. D-PO also takes action by directing Miller’s attention to new information, like alerting the officer to the possible suspect in the drone video footage. However, just like human teammates, machine teammates cannot anticipate their human partner’s every move. Therefore, a machine teammate must be flexible and take direction from its human partner.
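The sketch below makes the backup-coordination example concrete: it picks the best-positioned available unit by estimated arrival time and treats the result as a proposal rather than an order, since the officers involved still decide who responds. The unit names and timings are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Unit:
    officer: str
    eta_minutes: float
    available: bool

def propose_backup(units) -> Optional[Unit]:
    """Proactively suggest the closest available unit -- a proposal,
    not an order; the officers involved still confirm who responds."""
    candidates = [u for u in units if u.available]
    return min(candidates, key=lambda u: u.eta_minutes) if candidates else None

nearby = [
    Unit("Officer Smith", 10.0, True),
    Unit("Officer Lee", 14.5, True),
    Unit("Officer Park", 6.0, False),  # on another call
]
print(propose_backup(nearby))  # -> Officer Smith, 10 minutes out
```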
Although machines need to be able to act with some independence to be good teammates, in most environments the machine and the human should not be given equal decision-making authority. Especially in high-stakes environments like law enforcement, human officers should make the critical decisions. Great care must go into the amount of independence given to the machine teammate and what decisions it can make without approval from the human. For example, it is appropriate for a machine teammate to stop at a red light when given control of driving the squad car. Conversely, it would be inappropriate for this teammate to make the decision to pursue the potential robbery suspect.
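One straightforward way to encode that boundary is an explicit allow-list of routine, low-stakes actions the machine may take on its own, with every other action routed to the human officer. The sketch below illustrates the idea; the action names are illustrative only, not a proposed policy.

```python
# Actions the machine teammate may take without asking (routine, low-stakes).
AUTONOMOUS_ACTIONS = {"stop_at_red_light", "maintain_lane", "update_dispatch_status"}

# Actions that always require an explicit decision from the human officer.
HUMAN_DECISIONS = {"pursue_suspect", "initiate_traffic_stop", "use_of_force_recommendation"}

def authorize(action: str) -> str:
    """Return who decides: the machine acts alone only on allow-listed routine actions."""
    if action in AUTONOMOUS_ACTIONS:
        return "machine may act"
    if action in HUMAN_DECISIONS:
        return "requires human decision"
    return "default: ask the human"  # unknown actions fail safe to the officer

print(authorize("stop_at_red_light"))  # machine may act
print(authorize("pursue_suspect"))     # requires human decision
```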
Look to the Future
Some elements described in the example above are closer to reality than others. For example:
- An automated assistant that can search databases and find and organize information is close to reality. Some projects today are already doing some of this work (think of an advanced “Siri”).
- An assistant that takes over the driving is a long way off. Self-driving cars are designed for ordinary highway driving (controlled conditions, well-marked lanes, few sudden maneuvers); a self-driving police car would require far more sophistication: city streets with traffic and pedestrians, much greater unpredictability, and maneuvering at higher speeds.
- Machine learning and deep learning that automatically monitor real-time drone feeds are not a near-term capability. However, many intermediate approaches are far more feasible in the short to medium term. For example, a machine could assist officers in the police station with reviewing drone footage; when officers spot the suspect, the assistant could relay location information and recommended driving directions to the officer in the field.

Despite technological advances in autonomous systems and artificial intelligence, there is still a gap between current technology and the ideal machine teammate. Laboratories like PNNL are working hard to bridge this gap and make teammates like D-PO a reality. For more information, contact nwrtc@pnnl.gov or visit the Northwest Regional Technology Center.
Kristin Cook
Kristin Cook is a technical advisor, Visual Analytics, at PNNL. For over 20 years, she has been leading research and engineering projects to help people make sense of their data. Her current work focuses on the theoretical and practical challenges of creating human-machine teams.
Grant Tietje
Grant Tietje is a recently retired project manager at PNNL. He is a former paramedic, police officer, and emergency manager. As a project manager at PNNL, he focused on research and development of technology for first responders. He can be reached at Grant.tietje@pnnl.gov
Corey Fallon
Corey K. Fallon, Ph.D., is a cognitive scientist at Pacific Northwest National Laboratory (PNNL) with expertise in human factors, cognitive systems engineering, and experimental psychology. His current research focuses on how to transition machines from tools to teammates and assessing the risk of incorporating artificial intelligence to support human-machine teaming. He can be reached at corey.fallon@pnnl.gov