As an MSc research student, you are expected to understand how other areas of computer science relate to your chosen specialism. You must have a grasp of significant current open problems in other specialisms, as well as an understanding of how techniques and methods from other research areas can be applied within your specialism.
In this assignment you must provide an overview of two MSc Research specialism lectures (AI / Networking / Cyber Security / Software Engineering / Data Science) which are NOT the specialism you discussed in Assignment 1.
For each of these two areas, you must discuss the research question presented in the relevant specialist lecture, propose research approaches to investigate that question, and identify further work you might undertake which builds on it. You must also explore your personal strengths in each area.
PART 1: ROBOTICS AND AI
PART 2: SOFTWARE ENGINEERING
Part 1: Robotics and AI
1. Introduction of research specialism
Human-Robot Interaction (HRI) concerns how human beings behave toward robots in terms of the technological, physical, and interactive features those robots present. Assistive robots are developed to support independent living, for example for elderly people and children, and to provide other rehabilitative support. Typical functions of an assistive robot include reminding users about medication, helping to carry loads such as food, providing companionship in the way friends or family might, and alerting users to hazards. Several factors affect whether a robot's behaviour is perceived as socially appropriate, including its physical appearance, proxemics, gaze direction, and head orientation. This assignment examines the behaviour of robots as functional social beings by comparing it with the existing literature and available research approaches. Finally, the experience gained from this research is highlighted.
2. Open research question
- Is the robot acting as a functional social being would?
Interactive robots are designed with several technologies that support their social behaviour and interaction performance in their respective domains. In the view of Holthaus (2021), the social credibility of a robot can be measured by examining how it obeys and interacts within different social norms. Interactive robots can respond to both verbal and non-verbal communication if they are supplied with adequate information. A wide range of HRI research reveals that multiple factors, including the combination of design and physical appearance and effectiveness in both verbal and non-verbal communication, shape human perceptions of robots. For example, humans have different preferences for comfortable interaction with robots and tend to apply to robots the same metrics they apply to other humans (Holthaus et al. 2019). It has been found that a robot's physical appearance, for instance whether it looks like a human being, can improve the quality of face-to-face interaction.
Wang and Rau (2019) note that human acceptance of robots can be increased by qualities such as navigation that respects people's personal space and maintains adequate passing distances. Beyond that, the gaze and head orientation of interactive robots are used to signal their focus of attention and the social bonds they can create with human beings. Alenljung et al. (2019) report that factors such as politeness do not directly affect interaction quality between humans and robots; nevertheless, politeness is considered a significant quality for creating a positive impact and user engagement.
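The proxemics and passing-distance qualities described above are often operationalised using Hall's interpersonal distance zones, which HRI work commonly borrows for robot navigation. A minimal sketch, assuming the approximate metric zone boundaries usually quoted in the literature (the function name is my own):

```python
def proxemic_zone(distance_m: float) -> str:
    """Classify a human-robot distance into Hall's proxemic zones,
    using the approximate metric boundaries commonly cited in HRI."""
    if distance_m < 0.45:
        return "intimate"
    if distance_m < 1.2:
        return "personal"
    if distance_m < 3.6:
        return "social"
    return "public"

# A robot planning a passing manoeuvre could check it stays out of
# the personal zone:
print(proxemic_zone(1.5))  # a comfortable passing distance: "social"
```

A navigation planner could use such a classification to penalise trajectories that enter the intimate or personal zone unless the task (e.g. handing over an object) requires it.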
The research problem is the poor social credibility of interactive robots' social behaviour, which can increase user disengagement.
3. Existing and related work
With the introduction of the first industrial robots in the 1960s, robotic technology significantly transformed the manufacturing industry and later gave rise to interactive and collaborative robots. According to Zanatto et al. (2020), this development has brought several challenges in robot services, including the safe and flexible operation of fully autonomous robots and difficulties in learning from or with human beings. To mitigate these challenges, it is important to improve the social credibility of interactive robots by enhancing their speech, gestures, and gaze so that they support a natural workflow in their respective domains. On the other hand, Bensch et al. (2017) note that several techniques, such as the Situation Awareness Global Assessment Technique and Goal-Directed Task Analysis, have been adopted to increase the quality and efficiency of HRI. Moreover, GOMS (Goals, Operators, Methods, and Selection rules) has also been adopted in HRI to improve interface efficiency. Nevertheless, problems with the naturalness of robot movement, vulnerability, and expressiveness continue to harm human perceptions of robots.
Figure 1: Social credibility of robots (Source: Holthaus, 2021)
Kragic et al. (2018) argue that, to increase the acceptability of robots among human beings, such collaborative challenges must be mitigated by integrating natural cues and the perception and expression of emotion. Trust is another factor responsible for successful collaboration between robots and human beings (Kellmeyer et al. 2018). Trust is a fundamental psychological construct, reflected in interaction, behaviour, and attitudes, and affective and cognitive factors largely drive people's perception of a robot's reliability and social behaviour. Nevertheless, Sciutti and Sandini (2017) argue that it is not solely the robot's responsibility to embody all the qualities that contribute to positive social interaction: humans should also work on building emotional connections with robots in order to build positive, interactive relationships. Even so, social perception of robots and the trust humans place in them depend heavily on robot behaviour.
The social credibility of a robot refers to how believable and authentic it is. In the view of van den Berghe et al. (2019), a robot's social credibility and its acceptability among users are two distinct aspects. For example, a robot that lacks politeness and interrupts users' conversations, yet still shows empathy toward human emotions, might be judged credible but not acceptable.
4. Research approach
The case study reports a preliminary study with 30 participants that examined their responses to socially credible and non-credible robots. Questionnaires were used to measure participants' reactions to hazard warnings issued by interactive robots, with several independent and dependent variables capturing their perceptions. The study took place in a robotic house, and participants were informed that the robots could be accessed remotely. A kitchen power plug served as the safety hazard to which participants had to respond, with a Pepper robot acting as a secondary robot for safety warnings (Holthaus et al. 2019). An experimenter in the house monitored the robots.
This preliminary research yielded useful responses about social credibility and the safety-related functions performed by robots. Participants rated the potential danger of both the kitchen's power plug and Pepper as high, which suggests they did not trust interactive robots to provide safety. A limitation of the research is that no significant difference was found between the two conditions, Adhering to social Norms (AN) and Violating social Norms (VN), for either robot (Holthaus et al. 2019). Nevertheless, positive responses were higher for AN than for VN, from which it can be inferred that participants believe interactive robots that obey social norms possess more social credibility than those that violate them. Furthermore, the results offer insight into people's perceptions of, and willingness to respond to, warnings delivered by robots: most participants reacted positively to hazard warnings delivered by socially credible robots.
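The AN-versus-VN comparison described above can be reproduced on questionnaire data with a simple two-sample test. A hedged sketch in Python: the ratings, the 1-5 Likert scale, and all names below are invented for illustration and are not the study's actual data or analysis.

```python
# Illustrative comparison of Likert ratings for a robot Adhering to social
# Norms (AN) versus Violating them (VN). All values are invented.
from math import sqrt
from statistics import mean, variance

an_ratings = [5, 4, 4, 5, 3, 4, 5, 4, 4, 5]  # e.g. agreement with "the robot is credible"
vn_ratings = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with
    possibly unequal variances (sample variance from statistics)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)

t = welch_t(an_ratings, vn_ratings)
print(f"mean AN={mean(an_ratings):.2f}, mean VN={mean(vn_ratings):.2f}, t={t:.2f}")
```

In a real analysis the t statistic would be compared against a t distribution (with Welch-Satterthwaite degrees of freedom) to decide whether the AN-VN difference is significant; the preliminary study found it was not.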
The open question could also be addressed with a survey, which is beneficial for gathering many, diverse responses from participants all over the world. Bräuer et al. (2017) note that a survey is a comparatively easy way to gather information: it requires less time and can be administered remotely. Questions about the social behaviour, credibility, and safety warnings of interactive robots can be included. A survey would therefore be a strong alternative to the preliminary study used in the research.
5. Personal investment
This research has examined my interest in robotics and in the interaction-quality issues affecting interactive robots. Using the open question, several challenges, such as physical appearance, robots' lack of emotion, and trustworthiness, can be addressed. The role of technologies such as artificial intelligence (AI) in improving the quality of social behaviour in HRI can also be outlined, along with the advantages and disadvantages of HRI.
The personal experience and strengths gathered through this research will help in carrying out further research in similar areas. Each factor contributing to the poor social credibility of interactive robots can be examined, additional technologies that could enhance HRI performance can be identified, and greater knowledge of interactive and collaborative robots will support in-depth research in the future.
Alenljung, B., Lindblom, J., Andreasson, R., & Ziemke, T. (2019). User experience in social human-robot interaction. In Rapid automation: Concepts, methodologies, tools, and applications (pp. 1468-1490). https://dl.acm.org/doi/abs/10.4018/IJACI.2017040102
Bensch, S., Jevtic, A., & Hellström, T. (2017). On interaction quality in human-robot interaction. In ICAART 2017 Proceedings of the 9th International Conference on Agents and Artificial Intelligence, vol. 1 (pp. 182-189). https://upcommons.upc.edu/bitstream/handle/2117/108261/1818-On-Interaction-Quality-in-Human-Robot-Interaction.pdf?sequence=1
Bräuer, J., Plösch, R., Saft, M., & Körner, C. (2017). A Survey on the Importance of Object-oriented Design Best Practices. In 2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (SEAA) (pp. 27-34). https://www.se.jku.at/wp-content/uploads/2017/05/preprint-seaa-2017.pdf
Holthaus, P. (2021). How does a robot’s social credibility relate to its perceived trustworthiness?. arXiv preprint arXiv:2107.08805. https://arxiv.org/pdf/2107.08805
Holthaus, P., Menon, C., & Amirabdollahian, F. (2019). How a robot’s social credibility affects safety performance. In International Conference on Social Robotics (pp. 740-749). https://uhra.herts.ac.uk/bitstream/handle/2299/22060/safetycredibility.pdf?sequence=1&isAllowed=y
Kellmeyer, P., Mueller, O., Feingold-Polak, R., & Levy-Tzedek, S. (2018). Social robots in rehabilitation: A question of trust. Sci. Robot, 3(21). https://in.bgu.ac.il/fohs/Documents/pubs_events/socialrobots.pdf
Kragic, D., Gustafson, J., Karaoguz, H., Jensfelt, P., & Krug, R. (2018). Interactive, Collaborative Robots: Challenges and Opportunities. In IJCAI (pp. 18-25). https://www.ijcai.org/proceedings/2018/0003.pdf
Sciutti, A., & Sandini, G. (2017). Interacting with robots to investigate the bases of social interaction. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(12), 2295-2304. https://ieeexplore.ieee.org/iel7/7333/4359219/08068256.pdf
van den Berghe, R., Verhagen, J., Oudgenoeg-Paz, O., Van der Ven, S., & Leseman, P. (2019). Social robots for language learning: A review. Review of Educational Research, 89(2), 259-295. https://journals.sagepub.com/doi/pdf/10.3102/0034654318821286
Wang, B., & Rau, P. L. P. (2019). Influence of embodiment and substrate of social robots on users’ decision-making and attitude. International Journal of Social Robotics, 11(3), 411-421. https://www.researchgate.net/profile/Bingcheng-Wang/publication/329795578_Influence_of_Embodiment_and_Substrate_of_Social_Robots_on_Users%27_Decision-Making_and_Attitude/links/5da7244a4585159bc3d426be/Influence-of-Embodiment-and-Substrate-of-Social-Robots-on-Users-Decision-Making-and-Attitude.pdf
Zanatto, D., Patacchiola, M., Goslin, J., Thill, S., & Cangelosi, A. (2020). Do humans imitate robots? An investigation of strategic social learning in human-robot interaction. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 449-457). https://dl.acm.org/doi/pdf/10.1145/3319502.3374776
Part 2: Software Engineering
1. Introduction of research specialism
As a future mode of transport, the role of autonomous vehicles (AVs), known as 'self-driving cars', is increasing with time. The role of software engineering in the context of AVs is to ensure safety while these vehicles are in use. As per the case study, although AV development capability is increasing, ethical considerations remain an integral social barrier. The ethical issues related to AVs include trolley problems, safety issues, and the balancing of risks. Nevertheless, the safety record of AVs has been found to be comparatively better than that of traditional vehicles. This study describes the ethical issues surrounding AVs and the relevant literature, critically analyses the research approach used in the case study, and highlights an alternative research approach.
2. Open research question
- How do we factor ethics into AV software engineering?
Fully autonomous vehicles are expected on UK roads by the mid-2020s; the trend is growing, and several countries, including the UK, are engaging in real-world road trials. According to Holstein et al. (2018), autonomous vehicles can benefit human beings in several ways: they could reduce deaths from car accidents by about 90%, cut harmful emissions by about 60%, shorten passenger travel time, and increase fuel economy and consumer savings. AVs have been found to require less fuel and energy while driving, which reduces environmental emissions and air pollution. Nevertheless, questions arise from the ethical factors related to AVs and the problems they can create in the real world. Among these, the trolley problem has been considered one of the most significant ethical issues for AVs (Menon and Alexander, 2020).
Besides that, the risks associated with safety issues in AVs have also driven this open question. AVs currently have more accidents than human-driven cars, although the accidents are less severe (National Law Review, 2021). In addition, in most countries where AVs are permitted, streets and roads remain largely unmarked, which confuses the vehicles. Bonnefon et al. (2019) note that inconsistency in stoplights and road signage creates problems for AVs. Autonomous cars detect lane markings with cameras and are directed to follow the lane; the absence of such markings creates ethical issues by introducing risk into AV use.
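The lane-marking dependency described above implies that AV software must degrade safely when markings are missing. A hypothetical sketch of such fallback logic: the confidence values, threshold, and action names are all invented for illustration, and a real driving stack would use a full perception pipeline rather than two scalars.

```python
def lane_keep_action(left_conf: float, right_conf: float,
                     threshold: float = 0.6) -> str:
    """Choose a driving action from lane-detector confidences in [0, 1].
    Hypothetical policy: degrade gracefully as marking evidence weakens."""
    if left_conf >= threshold and right_conf >= threshold:
        return "follow_lane"
    if max(left_conf, right_conf) >= threshold:
        return "track_single_marking_cautiously"
    return "slow_and_request_handover"  # markings absent: fail safe

print(lane_keep_action(0.9, 0.85))  # well-marked road
print(lane_keep_action(0.2, 0.10))  # unmarked road: fail safe
```

The ethical point is encoded in the final branch: when the environment does not supply the evidence the software was designed around, the safe choice is to slow down and hand control back rather than guess.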
The research problem for this study is the set of ethical issues that arise from software engineering decisions in AVs.
3. Existing and related work
The introduction of AVs has raised ethical challenges concerning the interaction between AVs and human drivers in mixed traffic environments. According to Martínez-Díaz and Soriguera (2018), replacing human drivers with autonomous cars increases the chance of crashes, and several trolley-style cases involving AVs have resulted in serious harm: for example, if the vehicle continues toward a truck when there is a fork up ahead, serious injury may occur. However, Himmelreich (2018) argues that real-world trolley cases are very rare and that AVs are capable of distributing harm more effectively than a human driver could. The trolley problem is best treated as a safety issue, and it presumes an impracticable level of engineering capability. Faced with the situations the trolley problem describes, an AV can act to minimise risk rather than increase it: at moments of risk, an AV performs at its full capability and adjusts its behaviour accordingly.
Meschtscherjakov et al. (2018) consider the design of AVs to be another significant ethical issue. Machine learning allows AV programmers to build problem-solving tools, and to maintain ethics in design, programmers can include the training data needed to avoid accident scenarios. Similarly, Takács et al. (2018) note that engineering ethics, the principles developers apply during AV development work, should also be implemented to resolve ethical issues. In this context, developers can incorporate factors such as integrity, honesty, public interest, fairness, objectivity, and accuracy.
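One way to honour the point about including accident scenarios in training data is inverse-frequency weighting, so rare hazardous cases are not swamped by routine driving during training. Everything below (scenario names, counts, the weighting scheme) is an illustrative assumption, not a method taken from the case study.

```python
# Illustrative: rare but safety-critical scenarios get larger sampling
# weights when assembling training batches. Counts are invented.
import random

scenario_counts = {
    "routine_driving": 9500,
    "pedestrian_crossing": 400,
    "near_collision": 100,
}

def sampling_weights(counts):
    """Inverse-frequency weights, normalised to sum to 1:
    the rarer a scenario, the larger its sampling weight."""
    total = sum(counts.values())
    raw = {k: total / v for k, v in counts.items()}
    norm = sum(raw.values())
    return {k: w / norm for k, w in raw.items()}

weights = sampling_weights(scenario_counts)
# Draw a small training batch biased toward rare hazardous scenarios:
batch = random.choices(list(weights), weights=list(weights.values()), k=8)
print(weights)
```

The design choice here is ethical as much as technical: uniform sampling would optimise for the common case, whereas reweighting deliberately spends model capacity on the situations where harm is most likely.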
In the view of Fleetwood (2017), risk balancing is another ethical criterion for AVs and a legal requirement that must accompany autonomous systems. To reduce health and safety risks in AVs, government guidance can be followed to make risk As Low As Reasonably Practicable (ALARP). Hancock et al. (2019) point out that ethical issues can be mitigated by developer initiatives such as identifying safety hazards and following the principle of multiple risk mitigations. Besides that, technical capability, cost, and resource availability are also significant factors tied to ethics in AVs.
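The ALARP principle mentioned above is often applied in practice through a simple likelihood-severity matrix that triages each hazard. A minimal sketch, in which the 1-5 scales and the thresholds are illustrative assumptions rather than regulatory values:

```python
# Illustrative ALARP-style triage: risk = likelihood x severity, then
# classified into three regions. Scales and thresholds are invented.
def classify_risk(likelihood: int, severity: int,
                  low: int = 4, high: int = 15) -> str:
    """likelihood and severity on 1-5 scales; returns the ALARP region."""
    score = likelihood * severity
    if score <= low:
        return "broadly acceptable"
    if score < high:
        return "tolerable if reduced ALARP"
    return "unacceptable"

print(classify_risk(1, 2))  # e.g. low-speed depot manoeuvre
print(classify_risk(4, 5))  # e.g. unmarked junction at speed
```

Hazards falling in the middle region are where the "reasonably practicable" judgement, weighing technical capability, cost, and resource availability against further risk reduction, actually takes place.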
4. Research approach
The case study was completed using secondary sources related to the research area. It draws on several journal articles by different authors to critically analyse the ethical issues in AV engineering, covering a wide range of information about the engineering lifecycle and reflecting on safety-management decisions in AVs.
This research approach had the advantage of gathering a variety of information about the research area with little time and effort. It found that ethical consideration in AV engineering can be practised by delivering the safety and design accuracy required for dependable performance, and a process framework for building an ethical assurance case was designed in the study. However, most of the secondary sources used are more than five years old, which weakens reliability and validity: the approach lacks high-quality, real-time information about ethical issues in AV engineering (Menon and Alexander, 2020). Nevertheless, the study identifies factors relevant to risk reduction in AVs and offers an understanding of the road facilities significant for reducing AV risk, as well as the observation that different classes of people may experience AV-related difficulties differently. A further disadvantage is the lack of information about which engineering processes and techniques for AV design would deliver greater efficiency and reduce significant risks.
As an alternative research approach, primary research based on interviews could be used for this study. Paquot and Plonsky (2017) note that in this approach researchers collect data directly from people through interviews or surveys. Interviews help in explaining and better understanding different opinions about the research area, so interview sessions with participants from the automobile industry and from companies currently working on AVs would give a deeper understanding of the ethical factors in AV engineering. Questions about safety issues, trolley problems, and design issues in AVs would gather more diverse information, and whether the trolley problem is genuinely an ethical issue in AV engineering or merely a dilemma could be learned from people in the industry.
5. Personal investment
Analysing information in this area of ethical factors in AV engineering has helped in answering the open question. Identified issues such as safety risks, ineffective risk balancing, and trolley problems critically address the study's research question. It has also been found that using machine learning in design can help minimise ethical issues.
The practical knowledge and experience gained through this research will support further research in the future. Analysis of the provided research paper helps in understanding every vital issue associated with AV engineering, and knowledge of software engineering and its use in AVs will help in describing further issues in this area.
Bonnefon, J. F., Shariff, A., & Rahwan, I. (2019). The trolley, the bull bar, and why engineers should care about the ethics of autonomous cars [point of view]. Proceedings of the IEEE, 107(3), 502-504. https://ieeexplore.ieee.org/iel7/5/8662725/08662742.pdf
Fleetwood, J. (2017). Public health, ethics, and autonomous vehicles. American journal of public health, 107(4), 532-537. https://ajph.aphapublications.org/doi/pdfplus/10.2105/AJPH.2016.303628
Hancock, P. A., Nourbakhsh, I., & Stewart, J. (2019). On the future of transportation in an era of automated and autonomous vehicles. Proceedings of the National Academy of Sciences, 116(16), 7684-7691. https://www.pnas.org/content/pnas/116/16/7684.full.pdf
Himmelreich, J. (2018). Never mind the trolley: The ethics of autonomous vehicles in mundane situations. Ethical Theory and Moral Practice, 21(3), 669-684. https://johanneshimmelreich.net/wc/uploads/2018/05/Never-Mind-the-Trolley.pdf
Holstein, T., Dodig-Crnkovic, G., & Pelliccione, P. (2018). Ethical and social aspects of self-driving cars. arXiv preprint arXiv:1802.04103. https://arxiv.org/pdf/1802.04103
Martínez-Díaz, M., & Soriguera, F. (2018). Autonomous vehicles: theoretical and practical challenges. Transportation Research Procedia, 33, 275-282. https://www.sciencedirect.com/science/article/pii/S2352146518302606/pdf?md5=c64523b1d6f540a5225314110c8a7830&pid=1-s2.0-S2352146518302606-main.pdf
Menon, C., & Alexander, R. (2020). A safety-case approach to the ethics of autonomous vehicles. In Safety and Reliability (Vol. 39, No. 1, pp. 33-58). https://eprints.whiterose.ac.uk/152973/1/TSAR_2019_0014_R1_just_the_manuscript.pdf
Meschtscherjakov, A., Tscheligi, M., Pfleging, B., Sadeghian Borojeni, S., Ju, W., Palanque, P., … & Kun, A. L. (2018). Interacting with autonomous vehicles: Learning from other domains. In Extended abstracts of the 2018 CHI conference on human factors in computing Systems (pp. 1-8). https://oatao.univ-toulouse.fr/24800/1/meschtscherjakow_24800.pdf
National Law Review. (2021). The Dangers of Driverless Cars. Available at: https://www.natlawreview.com/article/dangers-driverless-cars [Accessed 7 December 2021]
Paquot, M., & Plonsky, L. (2017). Quantitative research methods and study quality in learner corpus research. International Journal of Learner Corpus Research, 3(1), 61-94. https://dial.uclouvain.be/downloader/downloader.php?pid=boreal:185993&datastream=PDF_01
Takács, Á., Rudas, I., Bösl, D., & Haidegger, T. (2018). Highly automated vehicles and self-driving cars [industry tutorial]. IEEE Robotics & Automation Magazine, 25(4), 106-112. https://www.researchgate.net/profile/Tamas-Haidegger/publication/329598453_Highly_Automated_Vehicles_and_Self-Driving_Cars_Industry_Tutorial/links/5c1754a792851c39ebf2eb77/Highly-Automated-Vehicles-and-Self-Driving-Cars-Industry-Tutorial.pdf