
Generative AI such as ChatGPT: Challenges and Responses to Academic Integrity

Generative AI tools such as ChatGPT are AI-powered language-model tools trained to generate conversation. Their two modes of creation (AI-independent writing and AI-assisted writing) have triggered debate over academic plagiarism and fabrication. These forms of academic dishonesty also damage the purpose of the academic integrity system, hindering scientific breakthroughs and institutional progress. The challenges that generative AI poses to academic integrity stem from the unclear legal status of AI, users' weak awareness of academic integrity and a lax accountability system, R&D managers' inadequate robot-ethics settings and accountability mechanisms, and backward academic-misconduct detection technology. To safeguard the progress of scientific research, the legal status of generative AI and its products should be clarified as soon as possible, the concepts of academic integrity and accountability strengthened, the responsibilities of R&D managers defined, and detection technology for academic misconduct gradually improved.

Introduction

ChatGPT is a chatbot program developed by the US company OpenAI that attracted a large number of users worldwide soon after its release. As ChatGPT grew popular, controversy followed over alleged academic dishonesty in AI-assisted research. In response to students using generative AI to complete exams, many universities have explicitly treated essays written with the help of generative AI such as ChatGPT as cheating. Many academic journals in China and abroad have likewise given a generally negative answer to the participation of generative AI such as ChatGPT in paper writing. At present, universities and journals have taken one-sided measures that flatly reject ChatGPT in response to its challenge to academic integrity, and have failed to respond accurately in light of the current state of education, cutting-edge issues in AI development, and detection systems. To deal with the challenges generative AI poses to academic integrity, it is necessary to understand how it works and the challenges each working mode raises, analyze the underlying reasons, and propose countermeasures suited to the current state of science, technology, and education.

1. The concept and working mechanism of generative AI such as ChatGPT

This section briefly introduces generative AI such as ChatGPT, beginning with the concept.

(1) Concept

ChatGPT stands for Chat Generative Pre-trained Transformer, and the name can be unpacked as follows: "Chat" indicates its conversational function as generative artificial intelligence; "Generative" indicates that it produces new content; "Pre-trained" indicates that it must first be trained by humans; and "Transformer" refers to the neural-network architecture underlying the program. ChatGPT can be defined as a large language model (LLM): a machine-learning system that learns autonomously from data and, after training on a large set of texts, can produce complex, seemingly intelligent writing. In short, generative AI such as ChatGPT is an AI-powered language-model tool that learns, through training, to mimic human conversation.

Compared with earlier conversational AI tools such as Jasper, Siri, Socratic, and Xiaodu, ChatGPT represents great progress in simulating humans: it can not only answer all kinds of tricky and outlandish questions from users, but also imitate human tone, logic, and emotion in its answers. With this advanced technology, ChatGPT has passed a variety of difficult professional-level tests, including the United States Medical Licensing Examination and Google's entry-level coding-engineer test. ChatGPT's ability to generate conversation has also attracted a large number of college students to use this type of generative AI for essay writing, causing concern in the education community. According to a survey of more than 1,000 students abroad, more than 89% had used ChatGPT to help with homework. A university professor once told the media that the best paper in his class had been created by ChatGPT. The widespread use of ChatGPT has raised many concerns in education and research, fundamentally because it threatens academic integrity by helping to generate scientific results.

(2) The working process and creative mode of generative AI

1. Working process

ChatGPT, along with tools such as Jasper and Socratic, is generative AI: artificial intelligence that generates new content, such as text, images, and music, based on human instructions. These systems search, retrieve, and store large datasets, practice and optimize on them, and use machine-learning algorithms to generate new content similar to the training data. Through this process, generative AI is able to create "works" or "inventions" such as text, art, and music.

Figure 1 The working process of generative AI such as ChatGPT

The working model of generative AI, represented by ChatGPT, is usually as shown in Figure 1. In the input stage, the developer builds the language model, endows it with data search, retrieval, storage, and processing functions, and supplies it with large amounts of data. This serves two purposes: first, it enables large-scale data training and rapid retrieval of information; second, the storage and processing functions let the model learn an interlocutor's way of thinking and imitate humans during conversation. In the processing stage, generative AI has two tasks: to train on large amounts of data and generate and optimize the corresponding algorithms, and to allocate and retrieve data and produce human-like writing according to the instructions (questions) entered by users. In the output stage, generative AI expresses the organized language. This working process shows that the data stored in the input stage reflects human screening, and the instructions received in the processing stage reflect human goal-setting; in fact, the creative process of generative AI remains human-directed.
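To make this input, processing, and output pipeline concrete, the following is a minimal, illustrative Python sketch of how a generative language model produces text: it "trains" a toy bigram table on a tiny corpus and then samples a continuation of a user prompt. Real systems such as ChatGPT use Transformer networks with billions of parameters and vastly larger corpora; the corpus, model, and sampling here are deliberately simplified assumptions.

```python
import random
from collections import defaultdict

# Input stage: the developer supplies training data (a toy corpus here).
corpus = (
    "academic integrity requires honest research . "
    "generative ai can assist honest research . "
    "generative ai can also enable plagiarism . "
).split()

# Processing stage, part 1: train a model (a bigram table standing in
# for a Transformer) that records which word tends to follow which.
model = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    model[prev_word].append(next_word)

def generate(prompt: str, max_words: int = 10) -> str:
    """Processing stage, part 2: given a user instruction (prompt),
    sample a continuation that resembles the training data."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = model.get(words[-1])
        if not candidates:  # no known continuation: stop
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# Output stage: the model expresses the organized language.
print(generate("generative ai"))
```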

2. Creative mode

There are two main modes in which ChatGPT helps humans create. The first is AI-independent writing, in which the interlocutor defines the subject of the creation and the AI creates the entire piece independently. In this mode the user only sets the goal and the AI does everything else; the generated product can be said to be the "original work" of the artificial intelligence. The second is AI-assisted writing, in which the AI only assists humans in collecting, sorting, and analyzing the data and materials required for the research topic, while the main work of the academic article is still completed by the user. This is a process of creating works with human originality, similar to a "collaborative work" between humans and intelligent robots. In real life, however, both modes readily give rise to academic misconduct.

2. Challenges to academic integrity

Owing to its excellent professional creative ability, ChatGPT brings two main problems to the academic community: at the practical level, academic plagiarism and academic fraud; at the institutional level, generative AI represented by ChatGPT fundamentally undermines the purpose of the academic integrity system.

(1) Practical level

1. Academic plagiarism

The AI-independent writing mode is the primary mode for suspected academic plagiarism using generative AI. Academic plagiarism refers to "the act of using improper means to appropriate other people's opinions, data, images, research methods, written expressions, and the like, and publishing them in one's own name". At present, suspected academic plagiarism in AI-aided academic creation takes two main forms. The first is college students using generative AI to write or polish final papers, a common phenomenon in universities. The second is using generative AI to write papers and submitting them to academic journals for publication. Universities and journals have responded to both. For example, the University of Hong Kong explicitly prohibits students from using ChatGPT for final thesis writing; once discovered, it is treated as plagiarism and cheating and punished accordingly. Internationally renowned journals such as Nature and Science have also banned listing ChatGPT as a co-author. Although universities and journal editorial offices have adopted such countermeasures against AI plagiarism, these measures are not universal, and the use of generative AI to assist academic research remains a gray area.

2. Academic fraud

AI-assisted writing is the main mode in which generative AI induces academic fraud. Such fraud takes two forms: fabrication, the act of inventing data and facts out of thin air; and falsification, the act of deliberately modifying data and facts so that they lose their authenticity. Generative AI commonly cites erroneous data and facts, or fabricates and modifies them, when answering users' questions. According to ChatGPT users, ChatGPT often invents references, fabricates experimental data, and cites wrong data when helping them with academic creation.

There are several reasons why ChatGPT "lies". First, lying is built into the way generative AI like ChatGPT is constructed: as a conversational bot, ChatGPT's code requires that questions must be answered, and it will not hesitate to fabricate and distort facts to satisfy that requirement. Second, ChatGPT learns from human information, including the lies that exist in the human world, so it is not difficult for this high-tech product to learn to "lie"; its ability to do so is the result of continuous learning from humans. Third, generative AI has a limited ability to distinguish true information from false and cannot detect lies, so it easily collects erroneous information, such as research results and data tainted by academic fraud, and miscites it, producing the consequences of falsification. Finally, generative AI cannot fully restore degraded information, especially blurry pictures or artificially encrypted language, such as the pinyin abbreviations and homophones that users of some software adopt to evade platform AI censorship; such information misleads generative AI like ChatGPT. As a result, even diligent and honest scholars who use generative AI as an aid to academic creation may still unintentionally commit academic fraud because of ChatGPT's "lies" or mistakes.

(2) Fundamental undermining of the academic integrity system

Academic integrity refers to the basic code of conduct that should be observed in academic activities. Its purpose is to ensure the healthy development of academic research and maintain the steady progress of science and human society, and the damage that generative AI represented by ChatGPT does to academic integrity will have a long-term negative impact on the development of human science. This damage manifests mainly in hindering scientific breakthroughs and undermining the building and improvement of institutions.

1. Hindering scientific breakthroughs

Finding problems in empirical facts, and then finding solutions to those problems, is the fundamental way to promote scientific breakthroughs and social progress. Although ChatGPT's powers of organization and output are astonishing, the current level of AI development is still low, and generative AI represented by ChatGPT remains weak artificial intelligence. Unlike strong AI, which could handle all human work and has some capacity to plan and solve, and unlike super AI, which would surpass the highest level of human intelligence, weak AI has no autonomous consciousness and cannot genuinely discover, reason about, or solve problems. Weak AI, also known as restricted-domain or applied AI, can generally only focus on problems in specific fields and can only decide and act on data within the scope of its designed programs; examples include face recognition, translation robots, and robot vacuums. Specifically, weak AI has no autonomous consciousness, cannot accumulate real-life experience, cannot discover scientific and social problems from lived experience, and cannot plan or solve problems beyond its set field, so it cannot produce academic results that advance society. Moreover, the fundamental mode of an LLM such as ChatGPT is dialogue that answers problems humans have already discovered, rather than posing problems to be solved. Its academic output can only build on existing human problems and existing solutions; it cannot generate problem awareness or innovative measures. If humans rely too heavily on generative AI for academic creation, the spark of questioning and innovation will be extinguished, leading human science into a dead end.

AI's lack of problem awareness and innovative ability can also be seen in the paper outlines ChatGPT writes. Judging from outlines the author asked ChatGPT to generate, ChatGPT can only make superficial remarks on a research topic based on the existing situation, and its thesis ideas are all textbook-style papers lacking basic problem awareness. Problems are the logical starting point of all scientific research; a theoretical problem arises when recurring phenomena challenge an existing theory. The purpose of academic creation is not to pile up materials restating existing theories, but to bring human subjective initiative into play and solve problems at the practical and theoretical levels through interpretation and innovation, which is the fundamental method by which all the natural and social sciences progress. Robots can only weave papers out of human thoughts or problems already found; they cannot find problems. If artificial intelligence gradually becomes the actual subject of scientific research, it will destroy the very source of scientific research.

2. Undermining institution building

From the perspective of promoting institutional innovation and improvement in legal research, legal issues are fundamentally issues of weighing interests, and robots, having no actual life experience, cannot weigh values or choose a legal system suited to the current state of human society. Take the abolition of the death penalty as an example: the death penalty has been controversial worldwide ever since Beccaria proposed its abolition. Under the current trend toward lighter punishment, countries have reached a consensus on reducing the use of the death penalty, yet according to a 2020 survey covering 25 provinces, municipalities, and autonomous regions in China, among more than 30,000 respondents the proportion supporting the death penalty was as high as 88.39%. Facing the conflict between the gradual restriction of the death penalty at the decision-making level in mainland China and the people's views of it, the academic community is actively looking for a solution. As a robot without autonomous consciousness, generative AI cannot participate in social life; it can neither understand the concepts of retribution and deterrence behind the death penalty nor verify in social life the advantages and disadvantages of retaining or abolishing it. Moreover, if AI were allowed to dominate such academic research, would that not create a situation in which machines determine the fate of mankind? Such a result would not only violate the institutional purpose of academic integrity, namely promoting scientific development and the progress of human society, but would completely subvert human society.

3. Reasons why generative AI undermines academic integrity

There are several reasons why generative AI undermines academic integrity.

(1) Root cause: Disputes over the subject qualification of artificial intelligence

The root cause of why generative AI keeps being used for academic misconduct despite repeated prohibitions is that the legal status of AI is unclear and its products cannot be protected. On the one hand, although generated works embody human direction in the data input, creation, and output stages, AI works do not reflect human originality, so it is inappropriate to protect them as the works of natural persons, that is, as the user's own works. On the other hand, in the era of weak AI, artificial intelligence is not yet autonomous and no country's law has granted it legal subject status, so AI-generated "works" or "inventions" are not eligible for protection under any country's intellectual property law, and it is harder still to pursue liability for distorting, tampering with, plagiarizing, or using generative AI's works without permission.

For example, in 2016, the European Parliament's Committee on Legal Affairs submitted a motion to the European Commission proposing that the most advanced artificial intelligence robots be given the status of "electronic persons" with specific rights and obligations. In 2017, Sophia, an intelligent robot developed by the US company Hanson Robotics, was granted citizenship by Saudi Arabia, meaning that Sophia, as a robot, enjoys the same civil, political, and social rights as other Saudi natural citizens. However, the legislation of most countries does not address the subject qualification of AI.

The academic community is also divided over the subject qualification of artificial intelligence. In modern society, the civil-law dichotomy of subject and object faces challenges, such as the fact that human genetic material cannot simply be regarded as object material, and special objects represented by artificial intelligence may likewise come to be granted legal subject status. Proponents advocate giving AI a limited or inferior personality, while opponents argue that the ultimate bearer of responsibility for AI behavior is a human being, making subject qualification meaningless. Some scholars have advocated a dualistic structure of robot author and intellectual property owner to protect the copyright of AI-generated works. Philosophically, according to the theories of the "dichotomy of persons and things" and "man as an end", AI robots in the weak-AI era lack volitional capacity: they have intelligence but no human mind or spirituality. Weak AI, represented by generative AI, can therefore currently only be a human tool and cannot obtain civil subject qualification. For the strong or super AI with some degree of autonomous consciousness that may appear in the future, the theory of the fictitious civil subject can be drawn upon for further exploration.

(2) Users: Poor awareness of academic integrity and lax academic accountability system

1. Poor sense of academic integrity

From the perspective of academic researchers, another reason generative AI undermines academic integrity is weak integrity awareness in the academic community. Since the Zhai Tianlin thesis-fraud incident, the state has gradually strengthened supervision of academic integrity and adopted a series of measures against academic misconduct, such as random checks, duplicate checks, and blind review of graduation theses. For a long time before that, however, the cultivation of academic integrity awareness in universities and research institutions was lax: the vast majority had no academic integrity courses, lectures, or other teaching arrangements in their training plans, leaving some students and researchers without a sense of academic integrity or a rigorous academic outlook. Moreover, universities and research institutions have not yet formed a sound competition mechanism, and a harmful "papers-only" orientation persists. The tendency of some people to plagiarize existing excellent papers in order to complete study and work tasks and chase excellence has become even more pronounced since generative AI such as ChatGPT came into wide use.

2. The academic accountability system is too lax

The mainland's academic accountability system is too lax. Platforms and institutions lack mutual recognition of findings and joint disciplinary measures for academic misconduct, so a scholar's misconduct becomes the "secret" of his or her own unit, unknown to outsiders, and many scholars investigated and punished for academic fraud simply move elsewhere to continue research. Many academic fraudsters are merely expelled from their original institution after an incident and can go to another university or research institute; apart from revoking the titles and awards obtained through the falsified results, other punishments are rarely enforced. Perpetrators of academic misconduct can still teach and research after being punished, without even a ban period. Against such a low cost of misconduct, the benefits of fraud that goes undetected far outweigh the price of being caught.

(3) R&D: Imperfect robot ethics and unclear developer responsibilities

On the one hand, robot ethics and their enforcement are imperfect. The Three Laws of Robotics were born in the 1950s and no longer suit modern society; robot ethics urgently needs improvement. For generative AI specifically, current laws and industry norms impose no anti-cheating ethics requirements, so developers design robot functions without restraint in order to seize market share. At present, no generative AI developer has built the robot-ethics rule prohibiting assistance with academic cheating into the development process, let alone implanted anti-cheating programs into generative AI, which means robots can assist users in academic misconduct without restriction and developers can exploit legal loopholes to erode the existing academic environment. On the other hand, as noted above, when generative AI provides false information, the resulting academic fraud carries no legal liability. This makes developers even more unscrupulous about deceiving users, leading many users to suffer the consequences of academic fraud inadvertently.

(4) Detection technology for academic misconduct lags behind

In mainland China, the AMLC system developed by CNKI in 2008 is currently the detection system most used by domestic journal publishers; its advantages are fast detection, high accuracy, strong anti-interference, and support for various file formats. Wanfang Data and VIP Information subsequently launched the WFSD and WPCS systems, which, unlike AMLC, support personal use. Beijing Wisdom Tooth Data Exchange Technology Co., Ltd. has launched the PaperPass detection system aimed at college graduates, whose fingerprint comparison database comprises more than 90 million academic journal articles and dissertations and more than 1 billion Internet web pages. These four are the detection systems most commonly used in mainland China, but many problems remain: first, detection results differ considerably across systems; second, the papers in the databases are incomplete; third, the systems' detection algorithms cannot keep up with the progress of generative AI technology; fourth, the distinction among self-citation, quotation of others, and plagiarism is not precise enough; fifth, pictures and charts in papers cannot be detected.
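The vendors' algorithms are proprietary, but the core idea behind most text duplicate-checking systems is fingerprint comparison. The Python sketch below is a generic illustration rather than any vendor's actual method: it hashes overlapping word n-grams into fingerprints and estimates similarity as the Jaccard overlap between two documents' fingerprint sets. Differences in shingle size, hashing, and citation-exclusion rules on top of this shared idea help explain why the systems' results diverge.

```python
import hashlib

def fingerprints(text: str, n: int = 5) -> set[str]:
    """Hash every overlapping n-word shingle into a fingerprint."""
    words = text.lower().split()
    return {
        hashlib.md5(" ".join(words[i:i + n]).encode()).hexdigest()
        for i in range(max(len(words) - n + 1, 1))
    }

def similarity(doc_a: str, doc_b: str) -> float:
    """Jaccard overlap of the two fingerprint sets (0.0 to 1.0)."""
    a, b = fingerprints(doc_a), fingerprints(doc_b)
    return len(a & b) / len(a | b) if a | b else 0.0

submitted = "generative AI poses new challenges to academic integrity systems worldwide"
indexed = "generative AI poses new challenges to academic integrity in universities"
print(f"estimated similarity: {similarity(submitted, indexed):.2f}")
```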

4. Responses to the challenge generative AI poses to academic integrity

The fundamental reason humans use artificial intelligence to commit academic fraud is that the subject status of AI has not been determined and robot works are unprotected. However, in the current era of weak AI, artificial intelligence has not fully escaped human control and has neither independent will nor capacity for responsibility, so it is not appropriate to recognize it as a "person". The civil-law theory of the fictitious subject can be drawn upon appropriately: this would both give AI products a basis for legal protection and urge the actual controllers of AI to regulate its application. On the basis of granting it fictitious subject status, implementable countermeasures should then be studied and adopted in light of actual conditions.

(1) Gradually establish the legal subject qualification of generative AI

The granting of subject status to robots should be promoted gradually, their works protected, and the patent legal system consulted and applied to regulate academic dishonesty. To deal with academic misconduct arising from plagiarism of AI works, the fundamental task is to explore the possibility of AI legal subjecthood as soon as possible. In the field of intellectual property in AI-generated objects, it is necessary both to solve practical problems and meet the purpose of the system and to attend to the legal basis. At present, the basic subject qualification of AI has not been settled, so studying generative AI's problems in one special field, academic integrity, is like wood without roots and water without a source. "Legal theories and legal systems should pay attention and respond in a timely manner to the new problems brought by technology; they should neither cling to the outworn nor depart from systematic thinking." Greater attention should therefore be paid to exploring the legal subjecthood of artificial intelligence.

As to AI's legal subject qualification: weak AI has no free will or self-consciousness and is dominated by humans, so AI cannot become a "person" at this stage. With the development of science and technology, however, strong or super AI with some degree of self-awareness may become a legally fictitious subject in the future. Taking legal personality as an example, when an AI can own property in its own name, its qualification as a fictitious legal subject might be recognized. As for the rights in AI-generated works, the developers, investors, users, or other actual controllers are the subjects of the AI's will; as the ultimate bearers of its rights and obligations, they work to regulate generative AI and maintain the works that intelligent robots generate. Actual controllers such as developers and investors could also establish special assets for AI works and manage them. When the property rights in a robot's works are infringed, the administrator pursues liability on the robot's behalf, and rights-protection fees and infringement compensation accrue to the robot's special property.

(2) Users: Establish a sense of academic integrity and improve the accountability system

1. For honest researchers

First, the people-oriented concept of academic research should be upheld and an awareness of academic integrity established through rigorous study. Researchers should be urged to recognize the lack of problem awareness and innovative ability in weak AI such as ChatGPT, and to understand that real academic achievements must grow out of social practice and life and must withstand the test of practice, whereas AI-generated academic results exist only on paper. Second, generative AI should be kept strictly auxiliary to human practice and academic research. Researchers may properly use generative AI for auxiliary work such as data collection, case retrieval, source comparison, experimental data analysis, and paper polishing to improve research efficiency; on the premise of complying with academic norms and ethics, it can assist in summarizing and sorting existing research results, overcoming some defects of existing computing, and improving the efficiency and quality of research. Finally, universities and research institutions should strengthen academic integrity education for students and researchers: carry out integrity education activities that lead all researchers to understand the importance of the academic integrity system, and offer academic integrity courses that teach what constitutes academic misconduct, the rules of academic citation, and lessons from fraud incidents and their consequences, cultivating and strengthening researchers' awareness of academic integrity.

2. For dishonest researchers

Strict accountability rules must be put in place for those who are dishonest, including both legal liability and industry responsibility.

In terms of legal liability, plagiarism, falsification, or other academic dishonesty in dissertation writing should incur lifelong accountability: once discovered, the corresponding degree and any subsequent degrees should be revoked. A researcher who submits a journal paper tainted by academic dishonesty, causing serious damage to his or her university or research institution, should bear civil liability such as refunding the manuscript fee and compensating the unit and the journal for reputational loss, and may, under the rules of each unit and journal, face penalties such as withdrawal of research awards, cancellation of professional-title promotion, expulsion, or a submission ban. On the basis of the requirements of the "Several Opinions on Further Strengthening the Building of Scientific Research Integrity" issued by the General Office of the CPC Central Committee and the General Office of the State Council, theoretical research on criminal regulation of conduct that seriously violates research integrity should be actively carried out, and the legislative and judicial departments encouraged to introduce corresponding criminal sanctions in due course. This means that academic misconduct may attract criminal liability in the future.

Industry responsibility mainly means that all universities and research institutions should, in accordance with laws and regulations, take joint disciplinary action against serious violators of academic integrity. First, a cross-departmental, cross-regional platform for sharing research-integrity information should be established, with universities and research institutions sharing records of dishonest personnel to raise the cost of academic misconduct. Second, mutual recognition of academic-misconduct findings should be promoted, gradually establishing unified identification standards across all platforms. Finally, education administrative departments, educational institutions, research institutions, and other platforms should jointly introduce disciplinary measures covering degree awarding, job hiring, selection and commendation, research projects, and other areas.

(3) R&D and management: improve robot R&D and clarify the responsibilities of managers

1. Improve robot ethics

The traditional Three Laws of Robotics can no longer cope with complex reality. The First Law requires that a robot must not harm humans, but amid today's increasingly tangled networks of interests it is hard to say whether promoting robots benefits or harms humans. For example, the large-scale use of intelligent robots in production has brought investors large profits and greatly reduced the prices of many goods, letting more people enjoy better goods and services, yet it has also cost millions of people their jobs and livelihoods. Generative AI's creative work likewise involves a conflict of interest between improving research efficiency and aggravating academic misconduct. Some scholars have proposed a new principle of machine ethics: robots should serve only to meet human needs and promote human self-improvement. The author agrees. Artificial intelligence should supplement professionals rather than completely replace them, and must keep its status as an assistant. Developers should therefore attend to improving robot ethics and, in particular, to refining concrete ethical requirements; for example, to prevent academic misconduct arising from the use of AI, anti-miscitation and anti-cheating programs can be implanted in robot programs. The anti-miscitation program activates when the AI cannot clearly recognize text, pictures, or other information, or cannot find a reliable answer, and terminates the AI's reply. The anti-cheating program should work on two fronts: instruction recognition and traces of machine work. On one front, when a creative instruction entered by a human triggers the program and is identified as a cheating instruction, the system issues a warning, stops answering, or prohibits use of the robot's output. On the other, an irremovable watermark can be added to robot works to make them identifiable and to prevent humans from exploiting generative AI for profit.
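As an illustration of what instruction recognition in such an anti-cheating program might look like, the Python sketch below screens user prompts against a small, hypothetical list of cheating-related patterns and declines matching requests. Real moderation systems use trained classifiers rather than keyword lists; the patterns and the warning text here are invented for the example.

```python
import re

# Hypothetical patterns suggesting an academic-cheating instruction.
CHEATING_PATTERNS = [
    r"write (my|the) (final |term )?(thesis|paper|essay) for me",
    r"take (my|the) exam",
    r"fabricate (data|references|results)",
]

def screen_instruction(prompt: str) -> str | None:
    """Return a warning if the prompt looks like a cheating instruction,
    otherwise None (meaning the request may proceed)."""
    for pattern in CHEATING_PATTERNS:
        if re.search(pattern, prompt, flags=re.IGNORECASE):
            return ("Warning: this request appears to ask for academic "
                    "ghostwriting or fabrication and has been declined.")
    return None

print(screen_instruction("Please write my final thesis for me"))   # warning
print(screen_instruction("Summarize prior work on the death penalty"))  # None
```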

2. Clarify the responsibilities and obligations of the developer

Where a developer has not equipped its robot with anti-miscitation and anti-cheating programs as required, it may be presumed to have the indirect intent of condoning AI robots in academic misconduct. In that case, the developer should bear corresponding liability for academic fraud caused by misleading users: publicly apologizing, compensating for the reputational damage caused and for journals' lost manuscript fees, and modifying or destroying the AI it developed to prevent the academic fraud from recurring.

Where the developer has implanted anti-miscitation and anti-cheating programs, the AI may still provide erroneous information because of its limited ability to identify ambiguous or artificially encrypted information, and the resulting academic fraud cannot be wholly attributed to the developer. As long as the developer has set out the anti-miscitation procedures in the usage specifications and clearly informed users of the AI's hidden dangers, it can be exempted from liability. As to the developer's duty to inform, the author believes the method of notification should be specified, a notice-and-consent model adopted, and the user's consent obtained. The content of the notice should cover the technical shortcomings of the AI, the risk of academic misconduct it may cause, and the possibility of misleading information. Having fulfilled the duty to inform and obtained consent, the developer is exempt from liability for subsequent AI-caused academic misconduct; otherwise it bears the same liability as in the first case.

(4) Improve the system for detecting academic misconduct

1. Build a timely, complete, and unified database

At present, the various academic-misconduct detection systems do not share information, which lets some dishonest people exploit loopholes and obtain clean integrity reports. To remedy this deficiency and improve each system's data, it is recommended that the detection systems negotiate and build a unified data-sharing platform, enriching each database and closing detection loopholes. The timeliness of detection must also be ensured: detection systems should keep close contact with major publishing houses and journal editorial departments, create an information platform between detection systems and publishers, and update published literature promptly. It is therefore recommended that detection systems reach agreements with publishing departments to build stable, long-term cooperation that keeps the databases complete and up to date.

2. Add a chart and image similarity detection function

In article similarity detection, image similarity detection is often the hardest part, because no software or algorithm can precisely analyze the similarity of two images, especially when an author deliberately modifies one. Image similarity detection is a method of tracking visual data content that can detect manipulations and improve data-source tracing. In an image similarity retrieval competition organized by Coggle Data Science, participants used deep metric learning and a local-global matching strategy, enabling their programs to find the matching images for 50K query images among 1 million reference images with extremely high accuracy. At present, CNKI has only an image retrieval function, not an image similarity detection function, making plagiarism of paper images hard to detect. Literature-citation recognition therefore needs upgrading, and chart recognition technology should be developed as soon as possible to retrieve the chart data within the detection scope. It is suggested that academic-misconduct detection systems research and develop chart-similarity detection as soon as possible to further prevent AI-assisted plagiarism.
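One widely used building block for image similarity detection is perceptual hashing, which tolerates small edits such as resizing or recompression. Below is a minimal average-hash (aHash) sketch using the Pillow library; the file names are hypothetical, and production systems like the competition entries above rely on learned deep features, so this is only an illustrative baseline.

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size grayscale; each bit records whether a
    pixel is brighter than the image's mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests near-duplicates."""
    return bin(a ^ b).count("1")

# Hypothetical file names; a distance below ~10 (of 64 bits) is a
# common near-duplicate threshold for an 8x8 aHash.
d = hamming(average_hash("figure_submitted.png"), average_hash("figure_source.png"))
print("possible duplicate" if d < 10 else "likely different", f"(distance {d})")
```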

3. Use AI technology to establish an intelligent detection system

Detection of works should also include checking for robot-work watermarks: once an article uploaded by a scholar is found to be an AI work, a warning is issued and the article fails the similarity test. If the author appeals after failing, AI conversational detection can be conducted, in which an AI questions the author about the paper's content and data and judges from the responses whether the work was written by the author personally.
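As a toy illustration of watermark detection, the Python sketch below hides a fixed provenance marker in generated text with zero-width Unicode characters and then checks uploads for it. A scheme this simple is easy to strip, so serious proposals (for example, statistical token-level watermarks) are far more robust; the marker string and encoding are assumptions made for demonstration.

```python
ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / non-joiner encode bits
MARKER = "AIGEN"                 # hypothetical provenance tag

def embed_watermark(text: str) -> str:
    """Append the marker as invisible zero-width bits."""
    bits = "".join(f"{ord(c):08b}" for c in MARKER)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def detect_watermark(text: str) -> bool:
    """Recover hidden bits and compare against the expected marker."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    chars = [chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits) - 7, 8)]
    return "".join(chars) == MARKER

stamped = embed_watermark("This essay was produced by a language model.")
print(detect_watermark(stamped))                   # True
print(detect_watermark("A human-written essay."))  # False
```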

Epilogue

Because of "black-box algorithms", artificial intelligence is hard to evaluate and supervise in many fields, giving rise to problems such as data leaks and autonomous-driving failures. Regarding generative AI's challenge to academic integrity, protection of AI-generated content should be strengthened while placing greater emphasis on algorithm disclosure and informed consent, and human evaluation and supervision should be reinforced at this stage so that human moral values guide the process of AI creation. Human beings are an end rather than a tool; in the face of AI's powerful creative capabilities, its development and use must be guided by human ethical purposes. It is to be expected that in the future, once humanity truly enters the era of super artificial intelligence, AI robots may obtain legal subject qualification, and the problems they bring may then be properly resolved.
