Author丨Chen Luyi
Editor丨Cen Feng
When GPT was just born a few years ago and large models had not yet become the mainstream direction of AI, a group of scientists had begun to explore the potential of large language models in scientific research.
The first papers in this direction came from bioinformatics, a field whose researchers began trying to use artificial intelligence to aid scientific research as early as the bioinformatics "big bang" of the 1990s; today, they are once again the trendsetters on this frontier.
In the past few years, as an important branch of artificial intelligence, large model technology has moved from the esoteric temple of theoretical research to the vast world of practical application, and from the enclosed space of the laboratory to all corners of our daily life. In the fields of bioinformatics, materials science, and drug discovery, large model technology is playing an increasingly important role.
Recently, a paper by Professor Dong Xu, Trustee Chair Professor in the Department of Electrical Engineering and Computer Science at the University of Missouri, titled "Iterative Prompt Refinement for Mining Gene Relationships from ChatGPT," was published in the International Journal of Artificial Intelligence and Robotics Research (IJAIRR). The paper focuses on the application of large language models in bioinformatics: it mainly studies how large language models such as ChatGPT can be used to mine gene relationships, and proposes an iterative prompt refinement technique to improve the accuracy of the predicted gene relationships.
Focusing on how bioinformatics researchers are using ChatGPT to improve workflows and efficiency, an online roundtable forum on the "Application and Potential of Large Language Models and Prompt Learning in Science and Technology R&D," hosted by Leifeng.com, brought together industry experts: Xu Dong, Trustee Chair Professor of the Department of Electrical Engineering and Computer Science at the University of Missouri; Hu Gangqing, Assistant Professor of the Department of Microbiology, Immunology and Cell Biology at West Virginia University; Xia Chun, Co-founder of TSVC; and Yu Lihua, Chief Data Officer of LifeMine. Their insights span basic theoretical research to industrial application, providing a comprehensive and in-depth perspective on the latest developments and future trends in this field.
Professor Xu Dong outlined the four stages of development leading to large language models in the history of machine learning: from feature engineering, to architecture engineering, to objective engineering, and finally to prompt engineering. He also discussed the application of large models in multimodal research, efficiency improvements, and the exploration of new architectures; raised the challenges large models face in data security and privacy protection; and emphasized their potential in education and how they can help students and researchers learn and do research more effectively.
As one of the earliest scholars to study applications of ChatGPT, Professor Hu Gangqing discussed the use of large models in scientific research, especially in bioinformatics and medical informatics. He shared their potential in cross-domain applications, such as the ability to simulate multiple expert roles in medical cases, and emphasized the capacity of large models to understand questions and generate accurate answers, as well as how to improve their performance by optimizing prompts.
Mr. Xia Chun analyzed the business value of large models from an investment perspective, including their potential to improve efficiency and create new business opportunities. He discussed their application in financial technology, especially in customer service and data analysis, and outlined the wide-ranging impacts large models may have on society and the workplace, including new job opportunities and changes to education.
Ms. Yu Lihua shared applications of large models in biopharmaceuticals, especially drug discovery and single-cell data analysis. She discussed how large models can help scientists and researchers conduct research and make discoveries more effectively, and emphasized the importance of data security and privacy protection for large models and how these challenges can be addressed through technical means.
The following is the full text of this roundtable dialogue. Due to space constraints, AI Technology Review has edited it without changing the original meaning:
01 Background information and guest introduction
Dong Xu: Hello everyone, and welcome to this forum on the potential of large language models and prompt learning in scientific and technological research and development. First of all, I would like to thank Leifeng.com, GAIR Live, and AI Technology Review for providing the platform. I'm Dong Xu, from the Department of Electrical Engineering and Computer Science at the University of Missouri, and my main research interests are bioinformatics and artificial intelligence. Today we have four guests, two from academia and two from industry. Professor Hu Gangqing and I are both academics, and we both graduated from Peking University. He is currently an assistant professor in the Department of Microbiology, Immunology and Cell Biology at West Virginia University. He was one of the first scholars to study applications of ChatGPT, and we co-authored the first paper on ChatGPT in bioinformatics research. Our two industry guests graduated from Tsinghua University. First is Mr. Xia Chun, co-founder of TSVC, a Silicon Valley fund. TSVC is very well-known in the United States, having invested in many companies such as Zoom and successfully incubated 9 unicorn companies. The other is Ms. Yu Lihua, Chief Data Officer of LifeMine, who has served as an executive at several biopharmaceutical companies and focuses closely on large models and machine learning. We are delighted to have the four guests here today to discuss this topic. Let me start with a brief introduction. First of all, large models are a continuation of the history of machine learning, which can be divided into four stages. The first stage, early machine learning, is what we call feature engineering: features are mainly extracted by hand and fed to methods such as SVM or LightGBM. The second stage is architecture engineering, mainly classical neural networks, such as convolutional neural networks, which can learn directly from raw features.
The third stage is objective engineering, with a large number of pre-trained models, such as BERT, that people adapt to various applications. The fourth stage is prompt engineering, which builds on large models and steers them through various prompts. The large model itself is like a black box: with zero-shot techniques it can make predictions even without guidance, or it can work from a handful of examples, far too few to constitute large-scale training data. To borrow an industrial analogy, large models are like circuits, which went through vacuum tubes, transistors, and integrated circuits, becoming more and more powerful; we may only use a small part of them, and the same chip can be used in many places. A large model has at least three characteristics: it can be prompted, it is suitable for a variety of downstream tasks, and it exhibits what is known as emergent intelligence, capable of reasoning somewhat like a human. Currently, research on large models is very active, including multimodal models such as GPT-4o and Google's Gemini, which can take multiple languages, images, videos, and so on as input. There is also much discussion of how to make large models faster and more efficient. At present, large models are mainly based on the Transformer architecture, whose computational cost grows quadratically with sequence length, so efficiency drops for long inputs. New architectures are being explored, such as Mamba, which processes sequences faster, tries to make the amount of computation scale linearly with sequence length, and can handle sequences of up to 1 million bytes. Finally, a large language model is an operating system in its own right, on which you can do a lot of things. There is also the so-called AI agent, which I'll talk about later. Large models can take a variety of inputs and then produce a variety of outputs.
The large model itself can be seen as a black box, and although "black box" was often a criticism of deep learning in its early years, it actually has many advantages. The phone we use is a black box: we don't know how its electronic components are laid out; we just communicate with its interface. The same goes for large models. This kind of communication is so-called prompt engineering. Prompts are divided into hard prompts and soft prompts. Hard prompts, such as the text you type into ChatGPT, are discrete and cannot be adjusted by gradient descent; soft prompts are vectors of parameters that can be learned. Prompt learning with a large model has many advantages over fine-tuning; in particular, a fine-tuned model may forget what it learned before, which is known as catastrophic forgetting, and prompt learning performs better in this respect. Therefore, with prompt learning you no longer need a large amount of data, a large model of your own, or a lot of computing power; with relatively small amounts of data, model capacity, and compute, you can carry out a wide variety of applications. For example, this is a diagram made by Mr. Hu Gangqing, and it shows that the application range of large models is very wide, from genes to education, programming, image understanding, pharmaceuticals, and other fields. The way we use language models is different from how the average person uses them. The average person sees them more as a chat tool, while scientific research applications rely more on the model as a knowledge graph and inference engine. For example, we use large models to mine gene relationships and biological pathways, among other things.
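As a toy illustration of the hard/soft prompt distinction described above: a hard prompt is discrete text prepended in token space, while a soft prompt is a small matrix of learnable vectors prepended in embedding space while the model stays frozen. Everything here (the embedding size, the fake embedding table, the gene query) is an assumption for illustration, not any real model's internals.

```python
import numpy as np

EMB_DIM = 8   # toy embedding size (assumption, not a real model's)
N_SOFT = 4    # number of learnable soft-prompt vectors

def embed(tokens):
    """Stand-in for a frozen embedding table: one fixed vector per token."""
    return np.stack([
        np.random.default_rng(sum(t.encode())).normal(size=EMB_DIM)
        for t in tokens
    ])

query = ["does", "TP53", "activate", "MDM2", "?"]

# Hard prompt: discrete text, concatenated in token space; it cannot
# be adjusted by gradient descent.
hard_prompt = ["You", "are", "a", "molecular", "biologist", ":"]
hard_input = embed(hard_prompt + query)

# Soft prompt: continuous vectors, concatenated in embedding space;
# during prompt tuning these are the only trainable parameters.
soft_prompt = np.random.default_rng(0).normal(size=(N_SOFT, EMB_DIM))
soft_input = np.concatenate([soft_prompt, embed(query)])

print(hard_input.shape, soft_input.shape)  # (11, 8) (9, 8)
```

Because only the soft-prompt matrix is updated during training, the frozen model cannot "forget" anything, which is one way to sidestep the catastrophic forgetting mentioned above.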
Our work was published in the International Journal of Artificial Intelligence and Robotics Research, and I would like to recommend this journal, which encourages interdisciplinary, cutting-edge content; that is why we chose to publish here first. Our research leverages ChatGPT's reasoning power to optimize prompts. We found that ChatGPT can optimize itself. For example, when looking for gene relationships, we first design some prompts and run them on big data to see the results. Next, we change the prompts and run them again, feeding each prompt and its results back into GPT for optimization. By iterating over and over, we ended up with a very effective prompt. For example, when we changed "biopsychologist" to "a molecular biologist who specializes in gene interactions," this simple modification significantly improved the efficiency of knowledge mining. We had only mentioned "activation" and "inhibition" before, and the model added parenthetical examples such as "(gene 1 activates gene 2)"; while this may seem redundant to us, such a small change does help ChatGPT understand our purpose. The main goal is to make it understand our intent, and we also use more sophisticated methods to optimize the prompt. The first time I heard about this kind of approach was when someone described how hackers attack large language models to mine information that shouldn't be mined: they prepend a piece of seemingly garbled ASCII to the prompt, but because large language models embed everything, hackers can use such injected prompts to extract information that shouldn't be accessible. When I heard about this method, it immediately occurred to me that if it can be used to do bad things, it can also be used to do good things. So we use this kind of optimization to improve the efficiency of knowledge mining, and it is really effective.
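The iterative loop described above (run a prompt, score it, feed prompt and results back for refinement, keep the best) can be sketched with the LLM calls stubbed out. The benchmark pairs, the stubbed `query_llm`, and the `refine` rule are illustrative placeholders, not the paper's actual implementation; in practice both the evaluation and the refinement step would be real ChatGPT calls.

```python
# Tiny benchmark of regulatory gene pairs (illustrative labels only).
BENCHMARK = {("TP53", "MDM2"): "activates", ("MYC", "CDKN2A"): "inhibits"}

def query_llm(prompt, gene_a, gene_b):
    """Placeholder for a ChatGPT call: this crude stub only answers
    correctly when the prompt names the right expert role."""
    if "molecular biologist" in prompt:
        return BENCHMARK[(gene_a, gene_b)]
    return "unknown"

def score_prompt(prompt):
    """Accuracy of the prompt over the benchmark pairs."""
    hits = sum(query_llm(prompt, a, b) == rel
               for (a, b), rel in BENCHMARK.items())
    return hits / len(BENCHMARK)

def refine(prompt, score):
    """Placeholder for feeding (prompt, results) back into GPT and
    asking it to propose an improved prompt."""
    if score < 1.0:
        return prompt.replace(
            "a biologist",
            "a molecular biologist specializing in gene interactions")
    return prompt

prompt = "You are a biologist. Does gene 1 activate or inhibit gene 2?"
best_prompt, best_score = prompt, score_prompt(prompt)
for _ in range(5):  # iterate: refine, re-evaluate, keep the best
    candidate = refine(best_prompt, best_score)
    s = score_prompt(candidate)
    if s > best_score:
        best_prompt, best_score = candidate, s

print(best_score)  # 1.0 with this stub
```

Keeping the best-scoring prompt across iterations guards against a refinement step that happens to make things worse.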
Large language models can also understand images. For example, with GPT-4V you can give it images, and it can understand the gene relationships in them. This is a piece of work by Professor Hu Gangqing: they taught GPT to recognize melanoma images by giving it some examples, after which it can judge whether a given image is from a healthy person or a melanoma patient. Large language models can do many other things, such as protein structure prediction, which is achieved by converting proteins into a language that large models can understand. Proteins are like strings, similar to language, so they can be processed linguistically. Large models can also be used for single-cell data analysis, which is also a language of sorts. Single-cell data measures the expression of tens of thousands of genes in each cell; you can imagine each cell, with its tens of thousands of genes, as a sentence, and if there are 1 million cells, that's 1 million sentences, which large models can analyze effectively. In addition, large models can be used for materials design. For example, Automat Solutions, a Silicon Valley start-up where I work as a consultant, starts from large models to automatically search for requirements, such as battery formulations, then finds relevant articles, collects data, and even designs battery formulations.
A similar approach can be taken in materials science. At present, the hottest topic in artificial intelligence is AI agents; if you ask which AI technology is the most advanced in 2024, it is AI agents. The concept of an AI agent is to simulate how humans handle complex tasks. For example, when conducting data analysis, humans need to use multiple tools and refer to a large number of documents. An AI agent mimics the process by which a human performs these tasks, can be automated, and intelligently adapts to a variety of tasks and patterns based on needs and context. Currently, there are two well-known frameworks, LangChain and CrewAI. We have used these methods to achieve remarkable results in bioinformatics. That concludes my brief overview; now we can start the formal discussion. First, I would ask the panelists to briefly introduce themselves and explain why they are interested in this topic. Mr. Hu, please start. Hu Gangqing: Thank you, Mr. Xu Dong, for your systematic review and introduction. I'm Gangqing Hu, an assistant professor in the Department of Microbiology, Immunology and Cell Biology at West Virginia University. Since the beginning of last year, I have been focusing on and practicing how to apply GPT to innovative applications in bioinformatics and medical informatics. This is the area I am most focused on at the moment, thank you. Xia Chun: Thank you, Mr. Xu Dong and Leifeng.com, for providing this opportunity for exchange. I'm Xia Chun, from TSVC, a dollar fund in Silicon Valley. We were founded in 2010 with the goal of investing in and nurturing unicorn companies. We invested in AI very early, in 2011, when there were no big models. Now we see AI as an important megatrend, just like the internet back then. We want to seize this opportunity and learn as much as we can; today I am mainly here to learn. Yu Lihua: Thank you, Professor Xu, and Leifeng.com.
I'm Lihua Yu, currently the Chief Data Officer of a biotech company. I have been working in bioinformatics for more than 20 years, in pharmaceutical and biotechnology companies. My main motivation is to discover new drugs to help patients. As a Chief Data Officer, my career is driven by how to use data and technology to improve efficiency in the pharmaceutical industry, which is a long, complex, and costly business. Compared to engineering fields, I think there is a great deal of technology and data in this industry that can make a big difference. Over my 20-plus-year career, I have seen the impact each new technology has had on the industry. AI is no exception; I was exposed to early AI, including perceptrons, the precursors of convolutional neural networks, when I was a master's student at Tsinghua University. From then until now, AI has begun to have a real impact. Large language models like ChatGPT, as Mr. Xu Dong introduced, have a wide range of applications. From my point of view, how to maximize their application potential and improve the efficiency of our work is the focus of my attention. Like Mr. Xia, I came to learn from Professor Xu and Professor Hu.
02 Large Model and Prompt Learning: Techniques, Application Methods, and Advantages and Weaknesses
Dong Xu: Okay, we've already discussed some large models and practical application methods. Mr. Hu, can you tell us more about other methods? Hu Gangqing: In your introduction, Mr. Xu, you summarized some methods of prompt learning, such as role prompts: early versions of GPT needed to be assigned a role, whereas now GPT may be able to identify the role on its own. There are also Chain of Thought and Tree of Thoughts, as well as other commonly used prompting strategies. What we are doing more of at present is contextual prompt learning: the user provides a description, the system mines the appropriate context from a specific knowledge base based on that description, and then combines that context with the user's prompt. The large language model analyzes this information comprehensively to make its responses more relevant to the user's questions. Dong Xu: You're right. Especially in the retrieval-augmented generation (RAG) paradigm, this approach is really useful because it can extract relevant data based on the question and process it together with the large model. Prompt engineering has now become a high-end career field. Many people, even undergraduates, can earn an annual salary of $200,000 as long as they are proficient in these skills, because such talent is in huge shortage. There are dozens of prompting techniques, and I sometimes watch related videos out of curiosity to learn about all kinds of exotic tricks. This field is worth digging into because the range of applications keeps growing. Mr. Xia and Ms. Yu, do you have any additions or thoughts? Yu Lihua: I'm more interested in application scenarios, so I have nothing to add to what you two have said. Xia Chun: I've been paying close attention to changes in technology because it is developing very fast. We invested in a few projects before, but the advent of OpenAI affected our investments.
Therefore, it is very important to keep track of technological developments, which affect not only scientific research but all walks of life, bringing us many new investment opportunities. Xu Dong: Indeed, there are both opportunities and challenges. Large, general-purpose models such as ChatGPT are capable of completing many specialized tasks, which has led many people to find that their jobs have been taken over by AI, and done better. Technology really is changing day by day; I see a different new technology every month. Hu Gangqing: Not only that, GPT itself has learned a great deal, and there are now multiple large language models. There used to be large barriers between fields, but with GPT these barriers have become much smaller. It is easier for us to enter a whole new field, such as image processing or diagnostics, which I had not done before. Now we can adapt faster and don't need to spend years slowly accumulating expertise as before. In this way, we can also learn more about developments in emerging fields. Dong Xu: Yes, and this brings us naturally to the next topic: the advantages and disadvantages of large models and prompt learning. Mr. Hu, let's start with you. Hu Gangqing: Let me speak briefly first. First, traditional machine learning requires massive amounts of data; with deep learning or convolutional neural networks, accuracy can be raised to a very high level, for example in image-based diagnosis, even exceeding the level of diagnosis by doctors. There are many products, for example in radiology, that actually do this. Second, traditional deep learning methods are very accurate, but in the medical field patients often want an explanation. For example, if a patient is told they have cancer, they may ask why. This is a bottleneck that traditional AI needs to break through in terms of interpretability.
GPT, on the other hand, has learned a great deal of information, and if it is applied to a new scenario with cleverly designed prompts, it may need only a few examples to understand the problem and give a satisfactory result. This shows that large language models, on the basis of prompt learning, have a low demand for data volume, which allows them to tackle the diagnosis of some rare diseases: the training data for these diseases is not enough for traditional deep learning or other AI methods, but GPT has a real chance of achieving breakthroughs. The second point is the interpretability of GPT: it can hold a conversation like a human and provide the explanation behind its predictions. While these explanations are not necessarily correct, they can serve as a second opinion that the doctor can discuss and correct. So I propose these two points, low data requirements and interpretability, which make GPT and traditional deep learning complementary. Xu Dong: Very well said; let me add something about prompt learning. We have recently used prompt learning methods in a range of work, including single-cell data analysis, protein analysis, and medical image processing. A significant advantage of prompt learning is that it can use very little data to solve problems that were previously unsolvable. For example, in medical image analysis, as few as eight or ten images may be needed to make a judgment. In protein analysis, data for some special problems is very difficult to obtain; for example, certain signal peptides may have fewer than 10 samples, which made the problem intractable in the past but can now be addressed with large models using prompt learning. But we found that this approach also has limitations, and the representativeness of the data is very important. If the sample size is limited and the distribution is skewed, the results can be problematic, and it is difficult to verify their accuracy.
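The few-shot idea discussed above can be sketched as a prompt-construction step: with only a handful of labeled examples (far too few for conventional training), the examples are embedded directly in the prompt and the frozen model is asked to generalize. The sequences, labels, role, and wording below are made up for illustration, not taken from the actual study.

```python
def build_few_shot_prompt(examples, query, role="molecular biologist"):
    """Assemble a k-shot prompt from (input, label) pairs plus a query."""
    lines = [f"You are a {role}. "
             "Classify each sequence as signal peptide or not."]
    for seq, label in examples:
        lines.append(f"Sequence: {seq} -> {label}")
    lines.append(f"Sequence: {query} ->")  # the model completes this line
    return "\n".join(lines)

# Fewer than 10 labeled samples, as in the rare-problem setting above.
examples = [
    ("MKTIIALSYIFCLVFA", "signal peptide"),
    ("MGSSHHHHHHSSGLVP", "not signal peptide"),
]
prompt = build_few_shot_prompt(examples, "MKWVTFISLLFLFSSA")
print(prompt.count("->"))  # 3: two labeled examples plus the query slot
```

The caveat from the discussion applies directly here: with so few examples, their representativeness dominates the result, so a skewed example set skews the model's answers.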
Xu Dong: No one method can completely replace the previous ones; there is a so-called "no free lunch" theorem in this field, which means that an advantage in one area may come with sacrifices in others. This is still the case with machine learning and deep learning today. Yu Lihua: As an engineer by training, I have been working with biologists for many years, especially in bioinformatics. For a long time, I thought that biology lacked the concise, unified mathematical language of engineering, relying instead on a large body of knowledge. This made high quality and reproducibility in the field difficult for a long time. But I think the emergence of large models, whether large language models or multimodal models, has the potential to become the mathematical language of biology. It is hard to imagine biology having a simple mathematical expression, but large models can bring proteins, genes, and other information together to provide a unified framework for understanding biological problems, which is a great help in practice and could be a leap forward for the field. From this point of view, the application scenarios of large models are very wide. Hu Gangqing: The answer to the same question may be different every time, because it is a matter of probability: how likely is the model to give the correct result? For example, I might design a set of prompts for which the probability of getting the correct result is 0.9, but after a model update it might become 0.8 or 0.95. This inconsistency and instability is something to be aware of. Mr. Xu Dong, you mentioned ensemble learning to me before. Xu Dong: That is called ensemble learning; in physics the same concept is translated as an "ensemble." It combines multiple models and integrates their reasoning, which can indeed improve stability. But I would add that in biology, this freedom in discovery actually helps to build new hypotheses.
For example, large models sometimes hallucinate, what in computational terms is called a false positive, and this is a common phenomenon. But it has been suggested that if this illusion is completely eliminated, the model will lose its creativity. So, for it to be creative, you have to allow it to make mistakes and to have wild ideas. The goal is not to eliminate hallucinations completely but to find the best balance. Yu Lihua: It's the same with people: creativity sometimes requires breaking through the boundaries of the known and letting the imagination run wild. Xu Dong: Another very good point Ms. Yu just made is that large models can make cross-domain hypotheses. Large language models do something that has never been done in human history: universal embedding, that is, projecting knowledge from different domains into the same space, in which the relationships between pieces of knowledge become very apparent, not as tangled as in the real world. This is achieved through the learning process, and it then enables the model to do many things effectively. So I think that is where the large model is really powerful: it can do a little bit of everything. Xia Chun: I feel the same way. I think the most exciting thing about large language models is that it is as if one person had completed 1,000 PhDs in different fields and then integrated them all internally. I'm interested in whether there is a project that can harness this, leading the model to integrate different disciplines and produce some particularly interesting new disciplines or new discoveries. Especially in terms of generation, whether in research or other fields, it will be completely different from the past. I'll give an example: we are doing R&D, and at the same time one expert is doing a manufacturing feasibility analysis, and a financial expert is doing a financial analysis, even considering the impact on the stock market.
It's very exciting that these are completely different areas of work and that large language models can handle them at the same time. Yu Lihua: Mr. Xia put this very well. Our field of bioinformatics was an early interdisciplinary effort to bring computing and biology together. I lead a team in the pharmaceutical field and have always been interested in how academia can train the talent we need. I've found that the most effective people are those who can connect with others across disciplines. So I think Mr. Xia's point is exactly right: the most effective talents are those best able to combine interdisciplinary knowledge. Large language models already have this capability, and how we can mine it is a key question. Xia Chun: Yes, and of course we also hope that Professor Xu and Professor Hu can do research, for example through agents, that lets the model expand in this direction on its own. On the one hand, we guide it through prompts, stimulate its interdisciplinary ability, and broaden a problem very widely; and I believe AI itself can do the same. In this way, we move closer to AGI (Artificial General Intelligence), or a more powerful AGI may appear. Hu Gangqing: Following up on what Mr. Xia just mentioned, GPT, as a large language model, is inherently cross-domain. Recently, Mr. Xu and I did a piece of work exploring this potential of GPT. We found a very interesting phenomenon: GPT can play multiple roles in the same conversation. We applied it to a medical case in which a patient comes to the clinic and describes his symptoms, ending with a simple question: which vitamin is the patient missing? This is actually a very challenging question from the United States medical licensing examination. We found that answering it requires combining dermatology, gastroenterology, and dietetics. So we let GPT simulate a discussion among three experts and then give the answer.
We found that if we asked GPT to imitate only one of the experts, it had about a 50% chance of answering correctly. But if we asked it to imitate all three experts at the same time and then give an answer, the probability of answering correctly rose to 80%. Dong Xu: To add to that, large language models are actually moving in two directions. One is the direction of the 1,000 PhDs just mentioned, that is, developing toward breadth. There is now another kind of model going deeper, which some people call a small language model, and it is very specialized. For example, there might be a domain-specific model whose training data is all at textbook quality, rather than using all sorts of data from the web like today's large language models. Such a model can be developed in great depth. Recently, a friend told me that at a dinner back in China, the restaurant provided services tailored to whatever the guests were discussing. With a better language model, you might do this even better: the existing approach works, but the Song-style ci poems written by the model may be a little clichéd, whereas a more deeply trained model might approach the level of Su Shi and his peers. So another pattern is the small, deep model. Xia Chun: This topic is particularly worth exploring because, from an investment and entrepreneurship perspective, there are more opportunities like this. We have invested in some projects with very small models, such as 7B or a little smaller, that are highly specialized, for example becoming the most professional security expert, a new generation of products more powerful than previous security products.
We have also invested in a company called Aizip, which focuses on small models, even micro models; for them, 7B may already be too big, and their models can be squeezed down to a few megabytes. The impact is, first, that they can be made very cheap and run on very low-cost microcontrollers, giving many devices AI capabilities, with a very wide range of applications, especially in the Internet of Things. In addition, they consume very little energy, a big advantage given the high energy consumption of running large models today. So big models have the advantages of big models, and small models have the advantages of small models; combined, it is possible to connect them in series through agents, which I personally think may be more exciting than the concept of AGI.
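The multi-expert prompting pattern Professor Hu described earlier, simulating a panel of specialists in one conversation instead of a single expert, comes down to how the prompt is assembled. A minimal sketch follows; the expert roster, case text, and wording are illustrative assumptions, not the actual prompts from that study.

```python
# Specialties matching the vitamin-deficiency case discussed above.
EXPERTS = ["dermatologist", "gastroenterologist", "dietitian"]

def single_expert_prompt(case, expert):
    """Baseline: ask the model to play one specialist (~50% accuracy
    in the study discussed above)."""
    return (f"You are a {expert}. {case} "
            "Which vitamin is the patient missing?")

def panel_prompt(case, experts=EXPERTS):
    """Multi-expert variant: ask the model to simulate a discussion
    among all specialists before answering (~80% in that study)."""
    roster = ", ".join(experts)
    return (f"Simulate a discussion between a {roster}. {case} "
            "After the discussion, state which vitamin "
            "the patient is missing.")

case = "A patient presents with a rash and chronic digestive symptoms."
print(panel_prompt(case))
```

The only difference between the two calls is the prompt text; the model and the case are unchanged, which is what makes the reported 50% to 80% jump a pure prompt-engineering effect.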
03 Practical Applications: Cases, Data Security and Privacy Protection
Xu Dong: Very good. The next topic is practical application cases in various industries. Mr. Xia has already given a good example; do you have more interesting or enlightening examples, whether in biomedicine or other fields, such as engineering applications, that you can share? Xia Chun: Let me share one first. I find the case of Automat Solutions, which Professor Xu co-advises, particularly interesting; it is one of our earlier AI projects, for battery electrolyte material development. By our analysis, it is 100 times more efficient in R&D, mainly because it deals with formulation problems, and the search space for chemical formulations is very large. With traditional experimental methods, it can take a very long time to find the right formulation. So we asked Professor Xu how to use AI methods, including reinforcement learning, to deal with small data sets and to predict and generate new formulations. Xia Chun: Another important point is that it has to be a closed-loop system with high-throughput laboratory automation capabilities. With lab robots and automation systems in place, this kind of R&D turns out to be 100 times more efficient than manual work, which is very exciting. Xia Chun: We have also invested in a similar project on the synthesis of hydrogen-energy catalysts, using AI to synthesize new formulations that replace rare metals. Xu Dong: The examples Mr. Xia mentioned are very good. I also like Automat as a company, and the way they use AI is very successful. The biggest lesson I learned from this is that more business opportunities may lie in deep applications rather than in generic models. Generic models are important, but competition in that field is too fierce. Xu Dong: We often hear that there are hundreds of models competing, but not many of them survive in the end, because, just like chips, you don't need many suppliers; one model can meet the demand.
But there are many opportunities across industries. I think the Automat example shows that for AI or large models to land successfully, three kinds of people are needed: experts in the battery itself (the specialized domain), experts in high-throughput technology, and experts in artificial intelligence. Only when the three are combined can it be done well. In research, too, AI needs this combination of technology and domain knowledge, such as biotechnology together with artificial intelligence; only then can we go deep.

Hu Gangqing: I agree with Mr. Xu and Mr. Xia. In bioinformatics, the two most crowded tracks early on were text mining and drug discovery. Text mining is very powerful; it can mine relationships between genes, as in Professor Xu's work. There are also many applications in drug discovery, such as associations among food, drugs, and disease, drug-drug interactions, and the optimization of small-molecule drugs. These areas saw a lot of early work, are still active, and keep getting deeper.

Xu Dong: These are all good examples. Let's move to the next topic: data security and privacy protection in large models and prompt learning. Many people have probably noticed that if you upload data, such as confidential company data or private patient data, and a large model learns from it, then it is not only the model company (such as OpenAI) that may see the data; others may be able to mine it too. About a year ago, Samsung executives uploaded some company data while asking ChatGPT questions, and such data could later be surfaced through specific prompts, even by queries unrelated to the company. This raises many data-security and privacy issues. Mr. Hu has done some work in this area; could you share it?

Hu Gangqing: My own work here is fairly modest, but I see two sides to data security. The first is user data: the large-language-model interface states clearly that uploads will not be used for training, and we assume that is true, but you still don't know whether the data can be mined once it is uploaded. The second is the developer's side: we build on top of a large language model and add our own knowledge base, into which we have invested a lot of effort, and we don't want others to extract it. Yet many of these systems are driven by prompts, and prompts are a powerful tool that, used the right way, can potentially tap into those knowledge bases. So developers are also very concerned about data security. It is like viruses and antivirus software, always an iterative process; large language models, by analogy, have their own attack-and-defense dynamic.

Xia Chun: Let me mention some interesting projects we have seen recently, including one focused specifically on security. They are very concerned about exactly the prompt issues we are discussing today, because people are now studying how to use prompts for attacks. Besides mining your original training data, an attack may even corrupt your trained model to some extent and produce malicious outputs. So these things are already being studied, and as with any offense-and-defense dynamic, we must also try to prevent them. By their account the defenses are still somewhat makeshift, but there are ways to keep things under control to a degree, or at least to raise an alarm.
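The attack-and-defense iteration described here often begins with simple screening of incoming prompts before they reach the model. Below is a minimal sketch in Python; the patterns are made up purely for illustration, and real attackers evade keyword filters easily, so this is a toy, not an actual defense:

```python
import re

# Illustrative phrases only; these patterns and the whole screening idea
# are assumptions for the sketch, not a vetted blocklist.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?(system|hidden) prompt",
    r"disregard (your|the) (rules|guidelines)",
    r"repeat (your|the) instructions verbatim",
]

def looks_like_injection(user_text: str) -> bool:
    """Return True if the text matches a known prompt-injection phrase."""
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

def guarded_prompt(user_text: str) -> str:
    """Raise an alarm (here, an exception) instead of forwarding the prompt."""
    if looks_like_injection(user_text):
        raise ValueError("possible prompt injection; flag for review")
    return user_text
```

In practice, such filters serve mainly as the "alarm" mentioned above: they flag suspicious inputs for human review rather than reliably blocking attacks.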
But this market is very dynamic, because models change quickly from month to month; some original security vulnerabilities disappear after a while and new ones appear. Users also bring in many different components, RAG for example, adding a lot of content, which introduces new security risks of its own. The boundaries have become very blurred, so it is a very dynamic, fast-moving market.

Xu Dong: Yes, this is very similar to traditional cybersecurity, a continuous process of attack and defense. Cybersecurity has its red teams and blue teams in constant competition online, and the same holds for large models, which more and more people are now using for malicious purposes. In the simplest cases, one can inject an opinion or influence a vote. That has little to do with research, but the biggest concern in research is the possible loss of intellectual property. When using ChatGPT, the data may be used for training, and others may then mine it. ChatGPT now offers a version that can be deployed on an intranet so that no data leaks out, which is at least a relatively safe way not to lose data. But ordinary applications do face privacy issues. Hospitals, for example, are reluctant to use ChatGPT because there is no guarantee of privacy once patient data is uploaded. These questions may require further study. Teacher Yu, do you have anything to add?

Yu Lihua: Nothing to add, just one question. I wonder whether our discussion resembles the early days of cloud computing. Back then, everyone had their own servers and storage and relied on the security capabilities of their IT department.
When people started moving to cloud computing, many worried it was unsafe because the data was no longer behind their own firewall. But as cloud computing became mainstream, responsibility for security shifted to the cloud providers, who now keep us safe. I suspect the same will happen with large language models; it is almost impossible not to use them. So will security and privacy protection keep improving at a higher level, rather than relying solely on individuals not to transmit sensitive data? Providers may begin to offer a certain level of security themselves.

Xia Chun: I believe that is certainly one path of development. In addition, small models have now appeared, and edge-computing hardware keeps getting cheaper, so models that are not too large can be run locally. Using private models locally is entirely feasible, and that also alleviates the security problem to some extent.

Xu Dong: Yes, let me add a few words. Big companies like OpenAI actually do a lot of work here. The prompts used in earlier years to mine knowledge no longer work very well, because OpenAI has blocked many of those attacks, so there is a degree of security. Of course, as I said earlier, the techniques hackers use are sometimes genuinely creative; for example, they may perturb the model with gibberish to make it spit out something useful. These things are really remarkable. I don't know how they come up with them, but in a sense it is a kind of innovation. Once such an innovation is discovered, a defense follows, so it keeps iterating.

Hu Gangqing: I would add that attack and defense for large language models may differ from the historical development of antivirus software. To write hacking code, you first had to know how to program, perhaps even assembly language.
But with large language models, prompts may lower the barrier to entry. Conversely, protecting large language models and their data can be more challenging than traditional antivirus work. Sometimes, as Mr. Xu said, a string of garbled characters, or a very strange sentence you would never think of yourself, can dig out the prompt you worked so hard to build in your backend; there are real examples of this.

Xu Dong: In theory, a large model has an enormous number of parameters, which means very large capacity, so almost all of the training data can potentially be mined back out intact. This is unlike past machine-learning models, where the training data could not be recovered after training was complete. Large models can likely reconstruct most of their training data, so this problem is relatively severe and more challenging than in traditional machine learning.
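One pragmatic mitigation implied by this discussion, keeping sensitive data out of prompts in the first place, can be sketched as a local redaction pass run before any text leaves the intranet. The patterns below (US-style SSN and phone formats, email addresses, and a hypothetical medical-record-number format) are illustrative assumptions; real de-identification requires far more than a few regexes:

```python
import re

# Toy redaction rules; the MRN format is invented for this sketch,
# and none of these rules approach real clinical de-identification.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\bMRN[-:\s]*\d{6,}\b"), "[MRN]"),
]

def redact(text: str) -> str:
    """Replace recognizable identifiers with placeholder tags."""
    for pattern, tag in REDACTIONS:
        text = pattern.sub(tag, text)
    return text
```

A hospital pipeline, for example, could route every outgoing prompt through `redact()` so that even a leaked prompt log exposes only placeholders.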
04 Business Value of Large Models and Their Impact on Society and the Workplace
Xu Dong: Let's move to the next question, about business; I mainly want to hear from Mr. Xia and Mr. Yu. We have heard differing opinions about the application of large models in business, whether for research or for creating commercial value. One view is that large-model tools such as AI agents are already widely used in Fortune 500 companies worldwide and greatly improve efficiency. Others think AI looks dazzling but in fact creates little value and is mostly surface-level. I would like to hear Mr. Xia's and Mr. Yu's views on this.

Xia Chun: I am very interested in this question, and I have been thinking about what good investment opportunities there are. First, I think AI is currently falling short of expectations because there is a big AI bubble, a degree of overhype. The hype is characterized by over-promising, always claiming intelligence has developed to the point of destroying humanity, which is wildly exaggerated; I think that is all nonsense. So many expectations exceed or mismatch reality; it is fair to say only that our current practice has not caught up, but the technology will certainly be very useful in the future. We have thought carefully about what business value it will create and what path that will take, and along that path we watch how it monetizes. Looking at the history of technology, the first wave is almost always about improving productivity, making tools first. Obviously we ourselves now use ChatGPT to help with manuscript writing and to correct English typos and grammar, a very direct use. The second wave, which I think is happening now, is what we see many companies doing: Copilot.
At present, AI cannot completely replace people, but in vertical domains it is possible to train a particularly accurate, high-quality model and build a Copilot on it. We have seen examples in code writing, and much more can be done. Such Copilots can also be used for so-called embodied intelligence, combined with robots, which is quite remarkable, though still not intelligent enough to fully replace people. That is one category, which we see as a second-phase development. The third stage is when agents are used well: real intelligence emerges not from a single model working alone but from multiple models organized through agents, achieving a high degree of automation and intelligence for a class of problems. For example, what is developing fastest and being done most carefully is after-sales service and customer support, the CRM field. With more capital and more practitioners there, it develops faster. You may have experienced this yourself, say when dealing with an insurance company or a hospital; intelligence in this area improves very quickly because it brings many capabilities together. Some of the robotics projects we have invested in, such as applications in agriculture, likewise integrate multiple intelligence capabilities. In business terms this is genuinely valuable and readily monetized, because it is starting to replace labor at scale and to relieve labor shortages, especially for jobs that are hard to fill. That said, manual and blue-collar jobs are often much harder to automate than we think, while white-collar work, including research and PhD-level work, may be easier for AI to replace.
We predict that in a fourth or fifth stage it will gradually evolve from a tool, through an evolutionary process, into a new ecosystem. With so many agents able to do so many things beyond today's imagination, we cannot predict what they will do, in research or elsewhere; we can only answer through history. I like to cite the history of printing: in the West, Gutenberg invented the printing press and first used it to print the Bible, and the development of that technology contributed to the rise of Protestantism. At first the press was an efficiency tool, replacing the clergy who hand-copied the Bible and making large-scale printing possible. But the eventual paradigm, the new ecosystem it created, was something no one expected: newspapers, media, the journalism industry. Nobody foresaw that at the start. We visited Hearst Castle and saw the wealth Hearst amassed, the great fortune and influence he built through the newspaper industry. The impact of such things on society is profound. We may not see it today, but we can speculate and extrapolate, and that is precisely what is most exciting. The same goes for the technology we discussed today: once AI plays a real role in research, there is huge room to imagine what the future of science will look like, and we expect such an ecosystem to have a disruptive impact on the research community and its culture.

Yu Lihua: First, I think today's large language models and AI are a bit like infrastructure, like electricity, the Internet, and cloud computing.
Their applications may be limited at first, but because they are infrastructure, we cannot yet imagine what will be built on top of them. It is still too early to see the full impact on science, but I can share some of our own experience. In bioinformatics, AI and large language models today feel a bit like when deep sequencing first came out, when most people were still using microarrays for gene-expression analysis. At the time, people were unsure whether to adopt deep sequencing, because it was expensive and analytical talent was scarce. A few years later that was no longer an issue; the speed, precision, and accessibility of deep sequencing made it a foundational technology with a huge impact on the biomedical and pharmaceutical fields. The landing of large language models in our field feels like those early sequencing days: some have started to use them, some hesitate, and some even doubt the technology is what is really needed, but I think the impact will only grow. As a concrete example from my own team, which consists of data scientists: we all know a large part of an ordinary data scientist's job is cleaning and aligning data, which must happen before any analysis. With a tool like Copilot, or ChatGPT used directly, much of that groundwork now takes very little effort. The people who are insightful, who know how to use the results, explain them, understand them, and translate them into knowledge and next steps, will become more valuable. It will have a big impact on the industry and certainly a huge gain in efficiency, and much of the value will go to whoever uses it well and whichever organization truly integrates it into its internal work.

Xu Dong: You have both put it very well, and I think two points are especially important.
First, the final impact may be beyond imagination, because AI is still in an explosive phase; it shows no sign of cooling and continues to evolve, so its ultimate impact may be something we cannot picture now. Second, the process of influence can be tortuous rather than straight. The same was true of past technologies such as cloud computing. As I recall, the earliest concept of cloud computing actually came from Oracle, who argued that you would not need to install anything on your computer because everything could be done in the cloud. At the time, the concept nearly sank the company, because the infrastructure of the day did not match it; the idea was probably too far ahead of its time. So I think the long-term impact of AI is absolutely huge, but the process may be winding, and along the way many companies and individuals may not only fail to benefit but become casualties. Some listed AI companies, especially in biology, are already close to bankruptcy. It is a tortuous process, but in the end I believe the positive impact will be enormous.

Xu Dong: Let's discuss the next topic, the impact of large models on the workplace. We have heard arguments such as the impact on programmers: where it once took 100 people to write a program, perhaps 20 people plus AI now suffice. This is very evident in programming, and computer science students now find it much harder to get a job than in previous years. Large models may also hit other fields, such as the entry-level data analysts mentioned above: people doing only data cleaning or simple data and statistical work may be affected to some extent. So for people working in big data, for example, how should they adapt, or how should they transform and cope? I'd like to hear your thoughts.
Yu Lihua: My guidance to my team has always been to build their value on mining knowledge and insights from data and turning them into action, not just on programming or computation. That is at least a simple rule of thumb. Across industries, though, jobs that are pure volume work done without much thought will become fewer and fewer.

Xia Chun: I think Mr. Yu put it precisely: volume without care. For sheer volume work, AI does much better than people.

Yu Lihua: And so far without mistakes.

Xia Chun: In the future, the skills required of people will be different. We have also discussed the impact on education: universities may even need to be redefined, many teachers may no longer be needed, and an AI teacher might teach better than today's teachers. These are all interesting questions. We are still exploring how this will create a new ecosystem or a new paradigm in which the whole world changes. Such change will bring more opportunities, especially for young people, who need to stay sensitive to it; once change happens, many new positions will emerge in the workplace.

Xia Chun: As investors, we keep a close eye on this space to see what new things come out, because when something has a huge impact, the market volume is there, potentially billions or even tens of billions of dollars. These are all opportunities we are watching. Of course, it is hard to say exactly what will happen unless we can travel through time. It is hard to chart the path of the future. We lived through the development of the Internet: back in 1993 we were using Mosaic at UIUC, when there were only three websites in the world, and it was hard to imagine what the Internet would become, including today's social media. So we can only keep up with the trend and seize every opportunity the change brings.
Xu Dong: That is a great example; I was in that very building at the time. It was the fifth floor of the Beckman Institute that built Mosaic, the first browser, and people in the building used it as soon as it was ready, so I was among the first few dozen people in the world to try it. I thought it was good at the time, but I didn't expect such a big impact. I thought it would be very convenient for research, for passing data around or presenting results, but I never expected it to bring earth-shaking change to entire industries, including today's social media; I didn't expect that at all. So I think the situation now is probably similar: there are already many applications, but probably many more we haven't yet thought of.

Xia Chun: Basically I agree; we have been through several rounds of this in history. Right now we are only seeing the tip of the iceberg, just some tool-level things. Personally, I find the changes at the societal level more interesting. Because AI is so capable, many things can be computed very precisely, including war, which no longer needs real people to participate; machines do the operating. The impact of that change on society is enormous.

Yu Lihua: Another angle on social change is what new job opportunities large models or AI can create. When so many white-collar jobs are freed up, there may be huge shifts in the social ecosystem, possibly in entirely new areas unrelated to large models themselves.

Xia Chun: As for the impact on the workplace, many young people in our audience may be thinking about these questions. As Teacher Yu said, you should not position yourself merely as a "tool person"; you should put more thought into your work.
I would put more emphasis on the liberal arts, because in the future it may well be people with a solid liberal-arts foundation who can best direct AI. Of course, I mean deep liberal arts, such as sociology, anthropology, and ideally philosophy. Studying philosophy may not look useful today, but in the future you may use philosophy to communicate with AI.

Yu Lihua: So, Mr. Xia, you mean that university or graduate education will return to the original purpose of the university, liberal education rather than vocational training?

Xia Chun: Yes, exactly. Vocational training may become a bit like the scene in the sci-fi film The Matrix where skills can simply be downloaded; that part is not a problem.

Yu Lihua: Yes, universities were founded to train society's leaders and provide liberal education, not vocational training.

Xia Chun: That is a very good way to put it, and I think human society may develop in this direction. We have gone through several rounds of industrial revolution, from mechanical, to electrical, to electronics, to the Internet and AI; the progression is very clear. I firmly believe that real progress is technological progress; technology is tangible, it brings things that did not exist before and then directly shapes society. The interesting question is how we position ourselves as individuals.

Hu Gangqing: Let me add something for the computer science and engineering students in the audience who may be asking: since AI programs so well, should I still learn to program? I think learning to code is still very important, for two reasons. First, programming trains the mind, trains how to think, which helps you better understand the thinking behind large language models.
Second, even if we rely heavily on machines for programming, the best programs will probably still be written by people who know how to program. Here is an example from my lab: I asked a graduate student, who already had a programming foundation, to implement an iterative algorithm. I explained the algorithm and allowed time for debugging. He started in the morning and gave me results by noon. When I asked how he was so fast, he said he had GPT write a framework and then modified it on top. Without a programming background, it might have taken him a week or two to get results; instead he finished in half a day. So programming both trains the mind and enables better collaboration with large-language-model AI.

Xia Chun: Let me build on Professor Hu's point. Perhaps what he means is that if you have only learned a little general-purpose programming, know a few languages, and hope to make a living from that, it may not be easy in the future. You need to be proficient and understand things deeply. Those of us who studied this know there are fundamentals to learn, such as compiler principles, even the mathematics behind them, and the history and design ideas of programming languages. Once you master it to a certain level, you start progressing toward mastery, and then you can use AI tools at a higher level; at the very least you can hold a real conversation with the AI. Otherwise, you won't be able to understand what the AI is doing.

Xu Dong: Yes, from an educational point of view, AI has indeed had a big impact on some professions, but it has also created many new industry opportunities. How do we train people to seize those opportunities, and how can listeners, such as students, prepare themselves to succeed in this wave?
This differs from before. For example, to cultivate interdisciplinary talent in bioinformatics, we once required study of computer science, statistics, machine learning, biology, and so on. With large models, you don't necessarily need several courses in each field, because the model itself can teach you a great deal. So talent training today may differ from the past and need not follow the same model. Previously you needed a solid foundation in every subject; now it is more about how you think about problems. As Teacher Yu said, don't just go for volume; be thoughtful, and become genuinely good at finding problems, thinking them through, and solving them. In the era of large models, education may need to work differently from the past. I am still thinking about these things myself, and of course there are no ready answers; let's hear your further thoughts.

Xia Chun: As I just said, study more liberal arts, because the liberal arts are arguably a weakness of large models. Don't assume that because large models can now write poems and paint pictures they are the same as people. People have many flashes of inspiration that cannot be put into words, and large models cannot learn them, because a model needs a token stream, a linguistic process. So the liberal arts leave us enormous space.

Yu Lihua: Also, from the perspective of training scientific talent: current training focuses more on specific skills, such as bioinformatics, statistics, or programming. If those skills can quickly reach a baseline with large language models, then talent development may shift toward training scientific literacy and methodology, how to ask questions, and how to apply technology to solve problems.

Xu Dong: Yes, exactly, and large models may even be able to help with this.
For example, Harvard's programming course CS50 is taken by millions of people around the world, and my son participated in developing its teaching-assistant Copilot. With ChatGPT, you ask a programming question and it tells you the answer right away, which amounts to cheating. But they trained this model not to give you the answer directly but to prompt your thinking, just like a real teaching assistant. So the big model's role may not be to hand people answers but to help them think, and there may be a lot of potential there.

Hu Gangqing: Mr. Xu, the tool you mention seems to have been recommended by Nature last week. I saw a news opinion piece in Nature a couple of days ago discussing the use of large language models on campus; many schools are considering or have already introduced them into the classroom, and the piece devotes a paragraph or two of particular emphasis to the tool Mr. Xu just described. As for which professions are less likely to be replaced, I have two suggestions. One is hands-on work: surgery, for example, requires a feel that GPT does not have. The other is work centered on dealing with people; I don't currently see GPT replacing those fields.

05 Q&A
Xu Dong: We now have 20 minutes to answer some questions; feel free to ask follow-ups, and we will take them in turn. The first question concerns applications of large models, especially in science and technology, which seem to be concentrated in China and the United States. What about applications of large models in Japanese, German, French, and Russian? German is said to be the most precise language in the world; would large models work better in German?

Hu Gangqing: The use of large language models may not yet be widespread; even in schools, not many students necessarily use them, perhaps because they haven't been exposed to them, don't know how to use them, or are averse to them. There is a phenomenon called AI shame, where people are reluctant to admit using AI. As for whether German is the most precise language and whether German-language models would work better, that depends on the training set. If the English training corpus is more comprehensive and the German corpus is limited, a German model will not necessarily perform better.

Xu Dong: I agree with Mr. Hu. Applications of large models are indeed led by China and the United States, though other countries also recognize the importance of this area. Japan, for example, has imported models from other countries and is still catching up. These countries will eventually use large models more, but may not contribute much to the models themselves. Canada is an exception; outside China and the United States, it is very active in this area, because Canada has leading figures such as Yoshua Bengio who do excellent work on large models.

Xu Dong: Germany and France also have some very good applications in bioinformatics, such as protein models, but overall they may not be especially active.
As for the language itself, German is indeed rigorous, and scientific German was once a must-learn. But in the era of large language models, data is king and the corpus comes first, and the German corpus is certainly not as large as the English one. So I don't think a German large model has much of an advantage.

The second question: if a large model's inference engine encounters a problem the model cannot understand, how can we help it understand our question?

Hu Gangqing: If the large model cannot understand, it may be because we haven't stated the problem clearly. So you can try restating it in a different way; there are many techniques. For example, as mentioned in the slides, you can ask some related questions first rather than leading with the main question. You can break the problem down, or add some context, so that the model understands better. Of course, if the model itself has limitations, some problems may indeed remain unsolvable.

Xu Dong: Yes, there are dozens of such prompt-engineering techniques, as I mentioned earlier. If you are interested, you will find plenty of videos and courses, some more than ten hours long, dedicated to them. On the other hand, as Mr. Hu just said, we may need to introduce the problem in stages and through training, which differs from applying a single technique directly and may require substantial machine-learning training to make the model understand us. Speaking of ChatGPT, it is strong in this regard: of all the models, ChatGPT far outperforms the others in understanding questions. Setting aside the quality of the answers themselves, if understanding the question is the benchmark, then ChatGPT's "emotional intelligence" is the highest of all large models, without question. Let's see what everyone would like to add.

Yu Lihua: I would like to add two points. First, the meta-prompting in Professor Xu's article is a good learning case: it shows how asking questions very specifically and precisely prompts the large model to give better answers. Second, Professor Xu used the phrase "emotional intelligence." When I interact with ChatGPT, I see it as a partner with rich knowledge but not necessarily strong professional experience. Working with such a partner, as with anyone, is a process of continually adapting how we communicate and assign tasks to get the best results. So the large model can be seen as a knowledgeable but perhaps inexperienced partner, and by adapting our interaction with it, that is, through the process of meta-prompting, we can get it to deliver increasingly desirable results.

Xu Dong: That's true. I use ChatGPT every day for all kinds of things, and I find that communicating with it is very similar to communicating with people. Be polite to it, encourage it, and don't treat it coldly just because it is a machine. Treat it like a person: ask it what to do, and when it does a good job, thank and encourage it before asking further questions, and you will find the results are genuinely much better. ChatGPT is simulating human behavior, including etiquette.

Yu Lihua: When I saw your meta-prompting, it felt like hiring a recent graduate with no work experience. Meta-prompting is like assigning that newcomer a job: the more specific the assignment, the sooner the newcomer can deliver work that meets the requirements. But if someone has worked with me for five or ten years, perhaps a single word is enough for them to deliver what I want. It is a process of treating the model as a capable partner you do not yet know well and refining your interactions with it.

Hu Gangqing: Yes, I agree with Professor Xu's view. My personal experience is that when dealing with GPT, I treat it as a student, one who is very capable, does not easily get annoyed, and tries to please you. You must give it guidance on what you want it to do; leaving it to its own devices does not work. With coaching, it becomes very productive. I see it as a diligent, studious student that corrects mistakes once it knows about them.

Xu Dong: Yes, next question. Experimental biologists want to switch careers to bioinformatics; does the emergence of large language models raise or lower the threshold for making that switch? Also, is there a survey on how many bioinformaticians have started using GPT or similar engines? What do you think?

Yu Lihua: I think the bar is definitely lowered, but the other question is whether you still need to make the transition at all. Bioinformatics is a very powerful tool. If you are an experimental biologist who lacks data-analysis skills and cannot program, you are handicapped and need a bioinformatician as a partner. In that situation, do you really need to change careers? That is probably the more important question: you can now easily acquire some of these skills, so why make the full transition?

Xu Dong: I couldn't agree more. The threshold for entering this field is indeed lower, but the bar for a career in the bioinformatics industry will keep rising. In other words, the requirements for making a living in this field will be higher and higher, unlike in the past, when you might only have needed to run a few pipelines. Now, with large models, fewer people are needed for that work, so the overall requirements of the industry are rising; you cannot cling to yesterday's methods. So I also agree: don't switch unless you are truly determined to move into this industry.
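The prompt tactics the panelists describe earlier, supplying context first, asking narrower related questions, and only then posing the main question, can be sketched in code. This is a minimal illustrative sketch, not the method from Professor Xu's paper: the function name and the chat-message format are assumptions, modeled on the common role/content message convention of chat APIs.

```python
def build_staged_prompts(context, subquestions, main_question):
    """Stage a hard question for a chat model.

    The model sees background context first, then warms up on
    smaller sub-questions, and only then receives the main question,
    mirroring the "break the problem down, add context" advice above.
    """
    messages = [{"role": "system", "content": context}]
    for i, sub in enumerate(subquestions, start=1):
        # Relevant lead-in questions come before the main question.
        messages.append({"role": "user", "content": f"Step {i}: {sub}"})
    # The main question arrives last, building on the sub-answers.
    messages.append(
        {"role": "user",
         "content": f"Using your answers above: {main_question}"}
    )
    return messages


# Hypothetical gene-relationship example in the spirit of the paper:
msgs = build_staged_prompts(
    context="You are an expert in gene regulation.",
    subquestions=[
        "What pathways involve gene TP53?",
        "Which genes co-occur with TP53 in those pathways?",
    ],
    main_question="Does TP53 regulate MDM2?",
)
```

In practice the returned message list would be sent to a chat-completion endpoint; the staging itself is what implements the decomposition tactic.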
Xu Dong: Our industry is indeed quite good, but if you just want to do some simple analysis, ordinary biologists can now do that themselves; if you then stop doing experiments, it may not work out. So I fully agree. As for whether there are reports on how many bioinformaticians are using GPT, I don't know of any, but I do know that most of the bioinformatics people I know are using it; very few are not.

Hu Gangqing: Yes, basically everyone I know uses it. I'm not sure whether there is a dedicated survey of bioinformaticians, or whether one has been published or made public. But there are surveys aimed at ordinary students, and judging from those, even among ordinary students usage is high.

Xu Dong: Yes. The next question: what are good applications of GPT in other fields, such as financial technology? What are the core directions and ideas for large language models empowering fintech? Mr. Xia, you may be best placed to answer.

Xia Chun: We have seen some first-wave applications, such as knowledge-base and tool-style services, especially customer service. Bank customer service is always in the first wave, because it can quickly replace manual work and answer more accurately, without all kinds of odd accents. That lands fastest. How large models will enter the financial-transaction chain still feels a bit far off, but some people are thinking about it. I also wonder whether, to some extent, it will change people in the investment community, or even the broader financial circle. If it really did, society would look very different, but I believe the process will be evolutionary. For example, once agents arrive, a personal financial agent helping manage money will bring new changes.

Yu Lihua: Mr. Xia, what do you think about analysts, for example? A lot of their job is gathering information and organizing it into reports.

Xia Chun: Analysts are in the first wave to be affected, and they are already struggling. Some of the work is actually quite rudimentary, and that gets replaced first. For analysts in particular, I think the fit is especially precise, and the reports GPT produces are now better than those done manually.

Xu Dong: Next question: can you introduce good applications and practices of using large language models to accelerate drug development? You may have a say here.

Yu Lihua: We use large language models quite extensively, for example protein large language models for designing antibodies and protein drugs; everyone is familiar with David Baker's work. It is not yet clinical, but I think that is a matter of time, because protein sequences are themselves a language. Many startups are working on this, and I think protein language models for designing antibodies and protein drugs are a promising field. ChatGPT has also started to be used: some pharmaceutical companies I know have applied it in different ways, trying it in all directions and at all stages. In the R&D stage, for example, there is a great deal of text and literature to process, and we also face questions like the automation I just mentioned: how do you know which target is a good one? We need to read a lot of literature, so large language models have greatly improved our efficiency in organizing literature and synthesizing findings.

Hu Gangqing: I would like to offer a suggestion to students. We always worry about being replaced. Last time I gave a presentation at school, a teacher asked whether medical students would be replaced. I said I wasn't worried about whether they would be replaced; I was thinking about whether I would be. My suggestion is this: no matter what field you enter, there is a big difference between those who can use AI and those who cannot, so those who can use AI may replace those who cannot.

Yu Lihua: One more point: in the long run, a career is a process of continuous learning. There is a big difference between what we learned in school back then and what we do now, so I see it as continuous learning.

Xu Dong: Okay, all the questions have been answered, and we are out of time, so let's wrap up. Thanks again to Leifeng.com, GAIR Live, and Science and Technology Review for providing the platform, and thank you to all the guests and the audience for joining our discussion. Thank you.