Imagine you are a lawyer with three hours to file a legal appeal that landed on your desk at the last minute. The truth is that, among all the professional tools at hand, none can guarantee a sufficiently useful way to resolve a specific case with a specific methodology in so little time… except one, which could help you draft a response in half an hour.

Such was the case of one of the teaching assistants of Obdulio Velásquez, former rector of the Universidad de La Sabana (Colombia) and now a professor at its Faculty of Law and Political Science, when ChatGPT was used as a support tool for the work. The academic told FORBES that he has been running various experiments with artificial intelligence in his classes.

One of the most recent took place in early December, when he graded a paper his graduate students in Obligations and Civil Liability had been given a week to complete. Days later, he put the same questions to artificial intelligence (AI) and found that it scored 4.5 out of 5 on fairly technical questions about American and common law. Its answers were very well written, but some were ambiguous and some were wrong.

That’s why it’s worth asking how smart artificial intelligence is or can be.

“Another thing I do is give my lecture and, at the end, ask it some basic concept questions. It handles some of them more or less well, but with errors (…) I don’t want my students to be afraid of it, and there should be no panic about them using it. We have to rethink the way we work,” said Velásquez.

It must be said that on November 30, the day ChatGPT was released, the world knew almost immediately that it would be a disruptive way of using the internet, even before reaching its full potential. The concept of AI as we know it may grow even broader, considering how quickly it is currently being adopted.

“In five days it reached one million users, and in two months almost 100 million. I think it is the fastest-adopted service so far, and here I think there are two issues: the ease of consumption and the paradigm shift for companies,” Christopher Weisz, Managing Director & Partner of BCG X, the technology design and construction unit of the consulting firm, told FORBES.

The first point means you don’t have to be a technical person to use it, though you do need to know how to ask the questions and steer the conversation toward the result you expect. As for the second, the paradigm shift lowered the barriers to entry for companies: a hyper-advanced analytics infrastructure is no longer required; it is enough to have “data, talent and infrastructure”.

Innovation and disruption

For business people, a first clarification is in order: AI as such is not a strategy. At least that is the analysis of Solve Next founder and CEO Greg Galle, who believes the key is to ensure alignment at the board level on where the potential of advanced analytics fits into a timely strategy.

“That is where there is an important job of identifying the areas where artificial intelligence can generate greater value, prioritizing those use cases and ensuring that there is funding to develop them, and it is not just an exercise for the technical team,” said the executive of the leading Silicon Valley innovation company in a conversation with FORBES.

Drilling into this detail splits the AI world in two: narrow and general. The former is the one that permeates organizations most: AI developed for a specific task in a business, impacting the value chain and the personalization of its offering; to do so, it draws on transactional, contextual and demographic variables.

Then comes the other side of the coin, far more complex and ambitious to implement: building computers and systems that can learn to do many of the tasks a human being can.

Another thing to keep in mind is that, according to Weisz, AI projects follow a well-defined ratio: 10% of the effort and value comes from algorithms, 20% from the technology stack, and the remaining 70% is focused squarely on people: process transformation and change management.

“Today there is a logic to building these tools, which is that part of that 70% must be in the hands of the people who operate them. That piece of knowledge has to be reflected in the tools. Something that helps adoption is when they are not black-box models; ChatGPT and the others are black-box models, because we still do not understand what is inside,” explained the expert.

In the short term, that human knowledge will be extremely important in building and deploying these tools, so the way forward must be guided by a more amplified vision in which people focus on adding value and shoring up current weaknesses, such as fact-checking. This alone makes one thing clear: human intelligence will have to keep working alongside AI to ensure that its output is coherent.

That is why it is worth asking how artificial AI can be.

The challenges

In fact, many sector roadmaps are already examining the challenge of access to AI. One of them is defense, something Galle can speak to with some authority after working hand in hand with the North Atlantic Treaty Organization (NATO) to review the advances its member countries are making around artificial intelligence.

“There was a lot of nervousness about how weapons systems could be dominated by AI. However, they realized that other nations would already be using it that way. It’s a combination of ethical and practical tension about how it is used, not just in the defense sector but in consumer experiences it can improve,” he stressed.

AI can even be used to monitor social networks, so that young people themselves can join organizations fighting child exploitation: screening photographs, detecting crimes such as child pornography and credit card fraud, and helping authorities strengthen their surveillance systems in those fields. “There are many applications we will see emerge, but we are in the early days of learning and discovering how that will happen and how progress will be made (…) People should not worry about the technology but about the people who use it,” he said.

This demands that society, the public and private sectors, and even educators themselves think together about how to create the jobs that may not be well paid today. If certain positions are going to rebalance the labor market, whether because of the technology itself or because of the change it brings to how things are done, the impact will be felt in how the needs arising in other sectors, such as care work, are met.

“In order to take it to the next level of productivity, let’s think about making a much more augmented decision on the basis of data and optimization, with a more global view and not just of a particular process. That is achieved with that change management, and that is why the term is not artificial intelligence but amplified intelligence: it is not that the algorithm is going to replace the operator of a mining plant, but rather, what other variables are there around the type of ore coming in, and how can I make a better decision to increase the plant’s recovery,” Weisz stressed.

How to regulate it?

In that quest for amplified intelligence, it is hard to know who will come out on top. Everyone wants a piece of the pie: Microsoft, Google and Meta are just some of the players diving headlong into the field. Even Elon Musk, who in recent years has wanted a hand in everything, is looking to build an AI to compete with ChatGPT. Although, frankly, the more competition the better.

“The fact that there is no single system dominating prevents us from becoming hostage to it. We’re not looking for a universal truth but ways for these technologies to improve our lives. We’re going to be dealing with more complex issues where we’re going to need assistance from machines, so that competition and variety is a healthy thing,” Weisz added.

Now, there is no rush to find all the answers either, as we are just getting our feet wet. Since we are still in the “early days” of this technology, the concepts we are now familiar with will change quickly. For the executive, the quality of thinking of the experts immersed in the subject will let us take advantage of the accelerating progress and find more advanced solutions.

By contrast, pair this with something like IBM’s quantum computer and all its capabilities, and you can end up with absolutist or controlling systems; we are already seeing models of total information control, according to Velásquez. And here we return to Elon Musk as a reference point, since his purchase of Twitter also raises a very complex free speech issue whose debate cannot be avoided when talking about AI. “Facebook, Amazon, Apple and Twitter decide what goes out and what doesn’t, and if that is done by humans, AI poses risks,” he said.

With this debate on the table, it is not just Google, Microsoft and OpenAI sitting down to shape that self-regulation; it is an issue that all companies, regardless of sector, will have to tackle to avoid the gray area of legal conflicts with the authors or owners of information an AI might use.

That is seen most pointedly in large companies, with the generative-AI model logic Weisz describes. This logic changes the paradigm, because smaller companies can take a working model from one of these large firms and fine-tune it with megabytes of their own information to use in their operations, for tasks such as image and web-error detection.

“With about 50, 100 or 200 defect images from your process, you can have a trained algorithm that helps you do that detection much faster and much cheaper than building a model from scratch for your operation. That increases access, for people who don’t have to be technical, but also for companies that can adopt these kinds of technologies and apply them to their business model to make a profit,” he stressed.
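The economics Weisz describes can be sketched in code. This is a minimal, hypothetical illustration, not his firm's actual pipeline: it assumes a frozen, pretrained backbone has already turned each image into a feature vector (simulated here with random vectors), so "fine-tuning" reduces to training a tiny classification head on a couple hundred labeled examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for features from a frozen, pretrained vision backbone.
# In practice these would come from a real model's penultimate layer;
# here "ok" and "defect" images are simulated as two Gaussian clusters.
n_per_class, dim = 100, 32
ok_feats = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, dim))
defect_feats = rng.normal(loc=1.5, scale=1.0, size=(n_per_class, dim))

X = np.vstack([ok_feats, defect_feats])
y = np.concatenate([np.zeros(n_per_class), np.ones(n_per_class)])

# The "fine-tuning": gradient descent on a logistic-regression head,
# leaving the (simulated) backbone untouched -- far cheaper than
# training a model from scratch.
w, b = np.zeros(dim), 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # predicted defect probability
    w -= lr * (X.T @ (p - y)) / len(y)          # cross-entropy gradient step
    b -= lr * np.mean(p - y)

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = np.mean(preds == y)
print(f"training accuracy with {len(y)} labeled examples: {accuracy:.2f}")
```

The point of the sketch is the division of labor: the expensive representation learning is inherited from the large model, and the company only fits the small head on its own 200 labels.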

The caveats
All the advantages outlined above do not erase the risks that still surround AI: these technologies are trained in an unsupervised or semi-supervised way, which means they cannot avoid consuming, and feeding back, content that inevitably contains biases.

“I believe every innovation and new technology has the potential to be used with bad intentions, and an associated risk that can generate fear. So much so that some have been scared it could be the end of the world, as imagined around emerging technologies. I have mixed feelings, because we must find ways to mitigate the risk. I don’t subscribe to the dystopian vision of a world where robots dominate us, and I don’t think we are headed there, but I have no doubt there will be people who use AI for bad things, and we have to be aware of that,” Galle said.

It is worth remembering that the first version of GPT, released in 2018, produced interactions with a fairly strong sexist and racist bias, owing to information consumed from the internet without any kind of review. What helps now are models like GPT-3, an autoregressive language model that employs deep learning.

“On top of the whole trajectory of different algorithms, they applied a reinforcement learning logic with a human inside the process who told the system whether what it produced was in fact correct or not,” Weisz said in his talk with FORBES.
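The human-in-the-loop idea Weisz alludes to can be reduced to a toy: the system proposes outputs, a human flags each one as correct or not, and those judgments become a reward signal that shifts future behavior. The sketch below is a deliberately simplified, hypothetical bandit, nothing like the actual reinforcement-learning setup behind ChatGPT; the "human" is simulated by fixed approval rates.

```python
import random

random.seed(1)

# Two candidate answer "styles" the system can produce; each keeps a
# running score estimating how often the human approves it.
scores = {"concise": 0.0, "rambling": 0.0}

# Simulated human reviewer (hypothetical rates): concise answers are
# approved 90% of the time, rambling ones 20%.
human_approval = {"concise": 0.9, "rambling": 0.2}

lr = 0.1
for _ in range(500):
    style = random.choice(list(scores))               # propose an output
    reward = 1.0 if random.random() < human_approval[style] else 0.0
    # Move the score toward the feedback just received.
    scores[style] += lr * (reward - scores[style])

best = max(scores, key=scores.get)
print(best, {k: round(v, 2) for k, v in scores.items()})
```

Even at this scale, the mechanism is visible: the human never writes rules, only yes/no judgments, and the system's preferences converge toward what the human actually approves of.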

“What I can no longer say, even if I wanted to, is that it doesn’t exist or that it will be banned. It is not going to go away. So we have to minimize those risks, and those responsible for these technologies have to ensure their veracity. That, for now, is still a work in progress,” concluded Velásquez.

Clearly there is still a significant information bias, which obliges us to ensure that responsible practices are implemented in the construction of these models. There is also a far-from-minor issue that will have to be settled by regulation: did those who trained these large language, image and video models have permission to train on that content? And if they did, are the authors being paid? There is a whole question about the consumption of, and consent over, that information that cannot be overlooked if we want AI to be a tool we can trust.
