Dramatically improved accuracy of translation tools
「先週、山口と広島に行った。」 and 「先週、山際と広島に行った。」
Until just a few years ago, translation tools could not reliably distinguish these two sentences. The first means "I went to Yamaguchi and Hiroshima last week." The second should be translated as "Yamagiwa and I went to Hiroshima last week": Yamaguchi is a prefecture, while Yamagiwa is a surname, so the particle と marks a destination in the first sentence but a companion in the second. Today's translation tools handle this correctly, yet only three or four years ago they rendered the second sentence as "I went to Yamagiwa and Hiroshima last week."
I use Google Translate and another translation tool called DeepL; these days I mostly use the latter. In the past, I would have my own English sentences translated into Japanese and then check whether the original English content still came through in the result. I rarely had Japanese translated into English, because I did not trust the tools' Japanese reading comprehension. Recently, however, I have had Japanese sentences translated into English and been impressed by clever English expressions I could not have come up with on my own. As a first draft, I have no complaints at all. The accuracy of translation tools has improved dramatically.
I have used DeepL for a long time because of its "algorithm that searches translations by vocabulary, idiom, and phrase to derive the best translation examples" and its huge dataset. The current version seems to have been built by repeatedly refining the underlying neural network on top of that foundation. As an AI translation tool, it feels remarkably complete.
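If you want to try this programmatically rather than through the web interface, here is a minimal sketch using DeepL's official Python client (the deepl package). The API key is a placeholder, and the output shown in the comment is only what I would expect, not a guaranteed result.

```python
# Minimal sketch using DeepL's official Python client (pip install deepl).
# "DEEPL_AUTH_KEY" is a placeholder; you need your own key from DeepL.
import deepl

translator = deepl.Translator("DEEPL_AUTH_KEY")

# The second example sentence from the top of this post.
result = translator.translate_text(
    "先週、山際と広島に行った。",
    source_lang="JA",
    target_lang="EN-US",
)
print(result.text)  # expected: something like "Last week, I went to Hiroshima with Yamagiwa."
```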
ChatGPT: Natural sentences, but also incorrect answers
ChatGPT is one of the most talked-about AI technologies at the moment. "Chat", as you know, refers to real-time conversation on the Internet: when you visit our website, for example, a chatbot appears on your screen and answers questions about our products and services conversationally. "GPT" stands for Generative Pre-trained Transformer, a model that is trained in advance and then generates new sentences on its own; this is what is commonly called generative AI. Until now, most AI assistants, including the chatbots mentioned above, have worked from pre-determined patterns of "when asked like this, answer like this". ChatGPT, by contrast, draws on the enormous amount of text it learned from and composes natural sentences suited to the question.
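To make the difference from a pattern-matching chatbot concrete, here is a minimal sketch of calling a GPT model through OpenAI's Python client. The API key is a placeholder and the model name is only an example; the point is that the reply is generated on the fly rather than selected from a list of canned answers.

```python
# Minimal sketch using OpenAI's Python client (pip install openai).
# The key and the model name are placeholders/examples, not recommendations.
from openai import OpenAI

client = OpenAI(api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Summarize the strengths and weaknesses of machine translation in three sentences."},
    ],
)

# The text below is generated anew each time, not picked from a fixed list.
print(response.choices[0].message.content)
```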
If ChatGPT were a person, it would be the kind of person who answers every question confidently and pretends to know everything. When asked, it instantly produces natural, fluent sentences that match the request. It can translate and summarize text and even generate program code, but it can also give wrong answers with complete composure. Of course it is composed: it is a computer, and there is nothing in its heart to be upset about. Many of my acquaintances amuse themselves by coaxing wrong answers out of ChatGPT and sharing the results. In addition, ChatGPT does not cite the sources its answers are based on, and there is a risk of copyright infringement.
How to use ChatGPT?
When given a task or asked a question, ChatGPT gathers as much relevant information as it can in a short time and explains it in natural sentences. Recently there was apparently some discussion about whether it could be used to draft questions and answers for the Diet, which surprised me a little, because ChatGPT will give you wrong answers without hesitation. If the goal is to reduce the workload of bureaucrats and hold effective debates, the real shortcut would be for Diet members to study a little more themselves.
I think the tool will be very useful for university students writing reports for assignments. However, students need to be aware that the sentences ChatGPT generates may contain falsehoods; it is crucial to check them with a critical eye and correct them accordingly. In addition, since the questions you enter and the instructions you give may be stored and used for training, there is a risk that information you do not want to disclose, such as personal information, could be leaked. ChatGPT is so fluent, and produces sentences that look so accurate, that it is likely to lead to good grades, which is exactly why students cannot stop using it. Universities will find it difficult to evaluate students unless reports are combined with oral examinations and on-the-spot written examinations.
Policies on the use of ChatGPT still vary widely from university to university. For example, the University of Cambridge and the Paris Institute of Political Studies (Sciences Po) prohibit the use of ChatGPT for writing essays and presentation materials. At the University of Tokyo, reports must be the students' own work and cannot be produced using generative AI alone.
Scope and limits of generative AI
At the beginning of this blog, I gave an example of how dramatically translation tools have improved over the years. The final translated text still needs to be checked by the author, but much of it can be used as is. Generative AI will keep progressing at an accelerating pace. Nevertheless, I believe that for the time being humans will still need to stay in charge and apply critical judgment, no matter how smooth and plausible the generated sentences look.
So, will Japan Stratus Technology ever use generative AI? I imagine it could draft proposals, and it could also draft e-mails addressed to a broad, unspecified audience. However, given that it should not be used for anything involving confidential or personal information, its scope seems narrow.
I can assure you that I wrote this blog post myself. If it had been written entirely by ChatGPT, the tone might have sounded a little more mature.