How Artificial Intelligence Applies to Real Life


A sufficiently advanced AI could, in principle, surpass humans at practically any task. Against this backdrop, humans must weigh the trade-offs involved in building new technologies. Used effectively, sophisticated AI would undoubtedly extend human capabilities and accomplishments. However, AI may also be turned to harmful ends, such as privacy abuses or data breaches, with catastrophic consequences (Zhao, 2018).

AI advancement will inevitably make people’s lives simpler to some degree, but the negative consequences must also be addressed. In general, these might include job losses due to automation, disruption of social norms, and eventually societal upheaval. Some researchers have termed this transformation ‘artificial sociality,’ referring to the growing human reliance on mediated technology such as social networks (Rezaev & Tregubova, 2018). Autonomous driving has long been a contentious topic of debate in the realm of artificial intelligence.

On the one hand, automated cars can prevent many accidents caused by human error and are more energy-efficient (Peng, 2020). Nevertheless, the question of how ubiquitous they should become must be addressed; as they grow more popular, society may need to enact new regulations prohibiting or restricting human driver involvement as a safety precaution. Furthermore, fewer safety incidents will also help reduce costs, since the taxes paid by consumers may be adjusted accordingly: less will need to be spent on health care for automobile-accident injuries and on road development.

The second factor to consider is the effect on the transportation services business. As autonomous cars grow more common and individuals’ travel habits are reshaped, the cost of public transportation will decrease. Likewise, demand for navigation accuracy and performance will reach an all-time high, exposing the associated technology industry to disruption. This essay investigates how self-driving automobiles are portrayed theoretically and rhetorically, and how these portrayals influence their adoption and application in real life.

Self-Driving Cars in the Context of Sociological Theories

AI is frequently regarded with ambivalence in today’s society. While some are ecstatic about such technical developments, others are wary, arguing that their use could lead to catastrophic consequences. For instance, social robots may be entertaining while also helping with household chores or functioning as companions. Nonetheless, this innovation raises the privacy and security concerns frequently associated with portable, cloud-connected devices. Because they rely on software and data, such robots are also susceptible to cyber-attacks and to abuse by malicious governments and other actors (Anderson & Rainie, 2018).

Meanwhile, some fear that AI will automate employment, costing many people, such as taxi drivers, their jobs. Accordingly, sociological theories can be used to understand the root of human ambivalence regarding self-driving cars.

The main theories that will be explored are utilitarianism, Kantian ethics, and virtue ethics. To satisfactorily apply these theories to the topic in question, the following scenario will be used:

A self-driving car with a single passenger is driving on a two-lane highway. A double-decker bus with 30 passengers, including children, is approaching from the other direction. Three individuals are strolling along an observation trail to the self-driving car’s right. The bus unexpectedly careens into the path of the driverless vehicle. In this instance, the moral dilemma becomes apparent: the unmanned car must decide, with only seconds to spare, what to do. At the same time, the question of what the manufacturer programmed the autonomous car to do comes into play. When attempting to apply classical ethical theories to this dilemma, it is instructive to consider who the ethical actor is that decides how autonomous vehicles should crash.


Utilitarianism

What ethical theories suggest about the choice of crash programming is seldom clear-cut; frequently, it remains a debatable moral issue. Utilitarian ethics promotes collective satisfaction while reducing collective misery (Bissell et al., 2020). Thus, a self-driving automobile might be fitted with a sophisticated system enabling it to perform utilitarian computations on the predicted outcomes of various alternatives far more quickly and accurately than a human driver could. Proponents of utilitarianism might advocate equipping driverless cars with such features and programming them to crash only in ways that maximize expected utility.

Nonetheless, it is not entirely apparent that this is what a utilitarian would advocate. A utilitarian might be conscious that some individuals would be fearful of traveling in utilitarian cars, choosing instead vehicles designed to prioritize their occupants. Indeed, a 2016 survey showed exactly this: many people would prefer to buy a non-utilitarian self-driving car (Ackerman, 2016).

A perceptive utilitarian would take this into account and advocate that self-driving automobiles be designed to protect their users. The utilitarian could endorse this on the assumption that increasing the share of people willing to use autonomous vehicles instead of traditional automobiles would reduce total traffic accidents and fatalities.

Utilitarian supporters might advocate for whichever approach maximizes general satisfaction. That may mean enticing consumers into driverless automobiles with the assurance that their vehicles will respond in “non-utilitarian” modes in the event of a collision. This again underscores the need to determine precisely who the moral agent is that decides (Nyholm, 2018).

For example, if it is the automobile itself, then perhaps the best way for it to maximize utility would be to tell individuals that it is programmed in their preferred manner, yet crash in whichever manner actually maximizes utility. Alternatively, the moral agent could be the car manufacturer or the regulatory authority that permits these particular automobiles to operate.

When integrating utilitarianism into the above predicament, the utilitarian developer’s primary goal would be to minimize fatalities. Initially this appears straightforward: the self-driving car should sacrifice its single occupant by veering off the road to avoid colliding with the 30 bus occupants and three pedestrians. However, the driverless car’s moral calculation must consider various factors, such as the likelihood that existing safeguards will avert disaster. Perhaps the bus has more robust safety protections than the driverless vehicle. If so, the unmanned car’s detectors could legitimately conclude that a collision with the bus will not kill the bus occupants, and the self-driving car could collide with the bus in a controlled manner, sparing all involved from fatalities.
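The utilitarian calculation described above can be sketched as a simple expected-fatalities minimization. The option names, group sizes, and fatality probabilities below are purely illustrative assumptions invented for this sketch, not values from any real system; an actual vehicle would have to estimate such probabilities from sensor data in real time.

```python
# Hypothetical sketch of a utilitarian "crash-option" calculation.
# Each option maps to a list of (people_at_risk, probability_of_fatality)
# pairs, one per affected group. The best option minimizes the sum of
# n * p over all groups, i.e. the expected number of fatalities.

def expected_fatalities(options):
    """Return the name of the option minimizing expected fatalities."""
    def cost(groups):
        return sum(n * p for n, p in groups)
    return min(options, key=lambda name: cost(options[name]))

# Scenario from the text: 1 occupant, 30 bus passengers, 3 pedestrians.
# The probabilities are assumptions for illustration only.
options = {
    "swerve_off_road": [(1, 0.9)],              # occupant very likely dies
    "hit_bus":         [(1, 0.3), (30, 0.01)],  # sturdier bus, low risk to riders
    "hit_pedestrians": [(3, 0.8)],              # unprotected road users
}

print(expected_fatalities(options))  # -> "hit_bus" under these assumed numbers
```

Note how the conclusion flips with the inputs: if the bus were assumed to offer no extra protection, swerving off the road would minimize expected fatalities instead, which is exactly why the essay stresses that the calculation depends on the safeguards the sensors can detect.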

Kantian Ethics

Kantian ethics essentially embraces several core values capable of serving as universal rules. On this view, every person is regarded as an end in themselves and never merely as a means to an end (Ulgen, 2017). According to the theory, it is always more ethically defensible to uphold one’s duties in every circumstance than merely to limit suffering. This approach is founded on the distinction between causing suffering and permitting it.

Thus, for the Kantian supporter, there exists an ethical distinction between causing an injury and permitting it to occur (Nyholm, 2018). When the theory is applied to the current situation, it is clear how the Kantian would configure the driverless car. The unmanned car would be designed not to act, resulting in a collision with the bus that would fatally injure its rider and perhaps the bus passengers, including the children. According to Ulgen (2017), this is because letting harm unfold is more ethically acceptable than taking any deliberate measure that could cause the death of those portrayed in the scenario.

Virtue Ethics

It is difficult to identify good virtue-ethical principles for how self-driving automobiles should crash. The purpose of virtue ethics is to cultivate and then fully realize a range of essential habits and excellences (Nyholm, 2018). Virtue ethics may nonetheless be useful when considering the ethics of autonomous driving more broadly. According to some studies, how cautious individuals are when driving, and how accountable they feel for their automobile, depend on their cars’ design (Coeckelbergh, 2016).

Regarding the case study, a programmer who upholds the principles of virtue ethics may feel that designing the autonomous car to hit the pedestrians to save the bus passengers is morally wrong. This is because the pedestrians were obeying the law, walking on the right side of the road; crashing into them amounts to punishing them for following traffic guidelines.

This insight has several implications for virtue ethics. Specifically, being cautious and accepting accountability for one’s conduct are traits humans admire in people who use dangerous technology such as automobiles. Thus, from a virtue-ethical standpoint, the takeaway is that automobiles must be built and coded in ways that encourage users to drive cautiously and responsibly when operating driverless cars. Self-driving automobiles may thus be referred to as moral technology; in other words, they might develop into technologies that aid in cultivating human virtues.

The above theories vividly underpin the current debates over the full adoption of self-driving cars. The focus has been on who the ultimate decision-maker is when an autonomous car faces a crash scenario. This scenario likewise raises another ethical dilemma: who should be held accountable when individuals are harmed or even killed by an autonomous car? Of course, the automobile makers would be held accountable for any technological malfunction that contributed to the catastrophe. In turn, this will incentivize makers of self-driving cars to invest substantially in developing high-quality vehicles, ensuring consumer safety and limiting their liability. However, others may argue that the driverless cars themselves should be held accountable in the event of a collision, especially if they are programmed to make decisions during such scenarios.

If self-driving vehicle makers and users are not responsible for incidents ending in injury or fatality, perhaps the automobile itself should be accountable. In reality, however, machines cannot be held morally accountable because they have no conscience. A reasonable route, therefore, is to have users of self-driving cars complete a user-consent agreement; in the event of an accident, culpability could then be assigned to either party. In practice, most owners and manufacturers of autonomous cars will ensure the cars are insured. With comprehensive insurance policies, users of autonomous cars will shift the costs of collision damage to their insurers, and manufacturers will follow the same path.

Discussion and Application in Self-driving Cars Acceptance

Acceptance of self-driving cars depends on individuals’ attitudes towards them. Miller (2019) supports this notion, arguing that social theories cannot accurately predict human attitudes towards AI. Attitudes are defined in this discipline as all judgments on a particular element of consciousness. Attitudes shape knowledge interpretation and action through emotional, cognitive, and behavioral components. When these evaluations are simultaneously perceived as favorable and unfavorable in a given situation, the attitude is said to be ambivalent (Liu & Xu, 2020).

In contrast, cognitive dissonance is a state of mental discomfort brought about by conflicting or inconsistent thoughts and behaviors (Annu & Dhanda, 2020). The motives for reducing ambivalence and dissonance may be analogous to the motives for reducing ambiguity. By distinguishing cognitive dissonance from ambivalence, perceptual conflict can be investigated as a dependent or independent variable. Accordingly, ambivalent attitudes towards self-driving cars denote the convergence of positive and negative appraisals, with the degree of ambivalence varying with the content of the assessments.

Thus, ambivalence toward self-driving vehicles may be attributed to two influences: (1) a lack of knowledge about artificial intelligence and robotics and (2) an innate attitudinal bias regarding the advantages or hazards of self-driving automobiles. For many individuals, this attitude straddles the line between positive and negative. Ambivalence abounds in news and media coverage of the development and application of self-driving automobiles, and these depictions significantly influence prospective buyers’ perceptions of whether to buy one.

In its piece “Ushering in a Safe, Driverless Future,” the editorial board of the New York Times argues that mandatory legislation governing autonomous vehicles is vital for the advancement of this technology. The piece aims to demonstrate that driverless automobiles can be harmful while also serving as a definitive remedy to the rampant road accidents and traffic problems experienced in major cities worldwide. It targets all personal car users in the US, seeking to convince them that driverless cars have more benefits than disadvantages.

The writers explain their position logically by weighing the merits and downsides from an ethical and rational perspective. The first part of the article implores readers to be rational when considering the real-world use of driverless cars. It emphasizes that autonomous cars are not perfect, given that in recent years they have been involved in serious accidents (Kingsbury et al., 2016). This is an expression of concern regarding the readiness and safety of driverless cars for today’s consumers.

The article cites a previous crash involving an autonomous car made by Tesla. The car was in self-driving mode when it crashed into a truck trailer while attempting a turn, killing its lone occupant instantly. The article aims to raise awareness that operating a self-driving car can be dangerous. This appeal to the audience’s sentiments stokes fear of driverless automobiles, but the piece swiftly counters such anxieties by offering expert information on the efficiency and reliability of autonomous automobiles. The editors then reframe the issue, suggesting it is the moral obligation of governments and manufacturers to promote and develop driverless cars in order to save the lives regularly lost in road crashes (Kingsbury et al., 2016). This powerful statement also leverages sympathy to imply that, despite citizens’ anxieties about autonomous vehicles, many benefits come with adopting the technology.

Consequently, for some people, the safety advantages of autonomous cars are essential. The promise that self-driving cars can save lives and prevent accidents rests on the fact that human error is responsible for more than 93 percent of catastrophic collisions (National Highway Traffic Safety Administration, 2021). Driverless cars can eliminate such incidents, protecting their occupants and other road users (Peng, 2020). Moreover, self-driving cars may bring new transportation alternatives to countless more Americans. According to NHTSA (2021), at least 100 million Americans have some type of disability, and in most regions of the US, employment and independent living depend on driving. Driverless cars could provide that kind of independence to many more people; in short, they will make it easier for the elderly and disabled to get from one location to another.

Self-driving cars will also help alleviate the parking challenges that plague traditional automobiles. Ordinarily, many families own several cars that are used at roughly the same time; as self-driving cars become more common, vehicle ownership will decline. A self-driving automobile does not have to stay parked in one place: a driverless car can take one person to work, return home, and then take other family members to their respective destinations, saving fuel. Additionally, practically all autonomous automobiles will be electric, emitting no carbon dioxide (Zöldy, 2018). This will reduce carbon emissions, as there will be fewer gas-powered vehicles on the road.

Conversely, self-driving cars will cause widespread job losses, most notably in the taxi business, compelling commercial and taxi drivers to seek other types of work. Employment in the transportation industry will be lost to automated cars (Pettigrew et al. 14). Individuals in the sector will be forced to seek employment elsewhere with no guarantee of success. This does not mean, however, that driverless cars will produce no jobs: as demand for unmanned cars grows and new manufacturers enter the industry, many job opportunities will arise.

Additionally, self-driving automobiles face the threat of cyberterrorism. Any computer system with internet access is technically vulnerable to hacking, and programming complex software free of bugs and invulnerable to external attacks is daunting. Hackers can exploit defects, such as programming weaknesses, to compromise computer systems, and more advanced systems such as driverless automobiles, which rely on real-time data, sensors, and cameras, present more such flaws. For instance, a driverless car’s sensors or cameras might be manipulated into steering it into the wrong lane or intersection, leading to serious accidents.

There is also the concern of losing autonomy, capacity, and control to AI machines. According to Anderson and Rainie (2018), code-driven, “black box” gadgets are automatically given the power to make important decisions in digital life. However, it is difficult for people to understand how these tools function, since they lack the relevant knowledge and background. In this context, individuals give up their freedom, privacy, and authority over their own lives; they have no say in the decisions made on their behalf (Howard & Dai, 2014).

These concerns explain why there is a general lag in policies and regulations concerning the adoption and implementation of AI and related technologies (Zhao, 2021). Thus, ambivalence towards AI and similar technologies will remain an obstacle for the foreseeable future.

To this end, consumers must have confidence in self-driving vehicles in order to use them. This implies trust both in their decision-making mechanisms and in the prompt execution of the choices made. It requires greater openness about modern computer algorithms and their applications in current autonomous car initiatives. Undoubtedly, it is a difficult matter, as the automobile will have to rely on its environmental awareness to determine the appropriate option.


Conclusion

Advances in artificial intelligence have driven the growth of robotics applications. In the long term, people will switch to self-driving automobiles on a broader scale, since this transition is inevitable. As with any invention, the benefits and drawbacks of autonomous automobiles have been described persuasively through rhetoric that appeals to users’ reasoning, sentiments, and morals. The extent to which these persuasive strategies shape public opinion towards driverless automobiles is still unknown.

Nevertheless, as discussed in this essay and through the lens of various ethical theories, the safety of AI and self-driving cars will require solid policy backing anchored in strict regulatory frameworks. Engineers and designers must also better comprehend and redefine the framework of traffic control and accident prevention. Driverless cars of tomorrow must engage constantly with other stakeholders, such as other autonomous vehicles, intelligent technology, and traffic control systems. This continual contact should define and shape the environment in which driverless cars operate.


References

Ackerman, E. (2016). People want driverless cars with utilitarian ethics, unless they’re a passenger. IEEE Spectrum. Web.

Anderson, J., & Rainie, L. (2018). Artificial intelligence and the future of humans. Pew Research Center. Web.

Annu, & Dhanda, B. (2020). Cognitive dissonance, attitude change and ways to reduce cognitive dissonance: A review study. Journal of Education, Society and Behavioural Science, 33(6), 48-54. Web.

Bissell, D., Birtchnell, T., Elliott, A., & Hsu, E. L. (2020). Autonomous automobilities: The social impacts of driverless vehicles. Current Sociology, 68(1), 116-134. Web.

Coeckelbergh, M. (2016). Responsibility and the moral phenomenology of using self-driving cars. Applied Artificial Intelligence, 30(8), 748-757. Web.

Howard, D., & Dai, D. (2014). Public perceptions of self-driving cars: The case of Berkeley, California. Web.

Kingsbury, K., Appelbaum, B., Bensinger, G., Cottle, M., Gay, M., Interlandi, J., Kelleyen, L., Kingsbury, A., Schmemann, S., Staples, B., Stockman, F., Wegman, J., & Fox, N. (2016). Ushering in a safe, driverless future. The New York Times. Web.

Liu, P., & Xu, Z. (2020). Public attitude toward self-driving vehicles on public roads: Direct experience changed ambivalent people to be more positive. Technological Forecasting and Social Change, 151, 119827. Web.

Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38. Web.

National Highway Traffic Safety Administration (2021). Automated Vehicles for Safety. NHTSA. Web.

Nyholm, S. (2018). The ethics of crashes with self‐driving cars: A roadmap, I. Philosophy Compass, 13(7), e12507. Web.

Peng, Y. (2020). The ideological divide in public perceptions of self-driving cars. Public Understanding of Science, 29(4), 436-451. Web.

Rezaev, A. V., & Tregubova, N. D. (2018). Are sociologists ready for ‘artificial sociality’? Current issues and future prospects for studying artificial intelligence in the social sciences. Monitoring of Public Opinion: Economic and Social Changes, 147(5), 91-108. Web.

Rohde, K., Vukovic, R., Zeldich, M., Ramesh, S., Hershkowitz, J., & Farkas, G. (2021). Benefits & risks of artificial intelligence. Future of Life Institute. Web.

Ulgen, O. (2017). Kantian ethics in the age of artificial intelligence and robotics. QIL, 43, 59-83. Web.

Zhao, W. (2021). Artificial intelligence and ISO 26000 (guidance on social responsibility). AI and learning systems – industrial applications and future directions. Web.

Zhao, W. W. (2018). Improving social responsibility of artificial intelligence by using ISO 26000. IOP Conference Series: Materials Science and Engineering, 428(1), 012049. Web.

Zöldy, M. (2018). Legal barriers of utilization of autonomous vehicles as part of Green Mobility. Proceedings of the 4th International Congress of Automotive and Transport Engineering (AMMA 2018), 243–248. Web.
