When Bill Gates – co-founder of Microsoft and a creator of the software that helped revolutionize personal computing – calls AI “the greatest technical advance of my lifetime”, it’s hard not to stop and think: “Wow”.
In an ABC-TV interview with Oprah Winfrey earlier this month, Gates shared his perspective on the potential of generative AI, a technology he believes could transform many aspects of society. In health care, for example, AI could act as a “third person” in medical consultations, providing real-time translations and summaries of what clinicians are communicating. In education, it could give every student a personal tutor that is always available.
The Gates comment that really caught my attention, though, was about the impressive speed at which generative AI tools – introduced to the world almost two years ago with OpenAI’s launch of ChatGPT – have advanced.
“This is the first technology that is evolving faster than even the experts expected,” Gates told Winfrey. While acknowledging the potential benefits of AI, he also expressed significant concerns about the risks involved.
And he is not alone in this view. Former Google CEO Eric Schmidt shared a similar concern last year, warning that “people will not be able to adapt” to a world permeated by AI.
Gates believes the pace of AI development requires companies to collaborate with governments on regulations that ensure the technology does not, among other things, harm the economy. Last week, the United Nations also weighed in on AI governance with a new report entitled Governing AI for Humanity.
Sam Altman, CEO of OpenAI, who spoke to Winfrey in the same special, struck a similar note, pointing out that “there’s been a pretty steep rate of improvement” in AI systems. He suggested that AI developers need to work with the government to create safety protocols for these systems, just as is done with aircraft or new drugs.
With that initial collaboration in place, Altman said, “we will have an easier time defining the regulatory framework later on.”
History tends to repeat itself: new technologies (social media, for example) get introduced, and governments try to regulate them only after they have caused damage. With that in mind, I wonder whether the conversations Altman says he has with government representatives “every few days” should have begun before the launch of ChatGPT.
Here are some other AI initiatives that deserve your attention.
Oprah discusses AI, but misses chance to dig deeper with OpenAI’s Altman
Speaking of Winfrey’s special, AI and the Future of Us, now available on Hulu: last week I mentioned that I would review it. Setting aside what Altman and Gates had to say, I have to admit I was disappointed by the questions Winfrey didn’t ask Altman.
Specifically, she never asked when, or whether, he will share details about the data used to train OpenAI’s popular chatbot. The question matters, especially since OpenAI and one of its backers, Microsoft, are being sued by The New York Times. The lawsuit alleges that the company scraped the NYT’s content library without permission, attribution or compensation to train the large language model (LLM) that powers ChatGPT.
Lawyers and legal academics claim that this lawsuit represents the “first major test for AI in the field of copyright”.
Although OpenAI has not revealed the data used to train its model, it argues that any copyrighted content that was copied from NYT and other creators to develop its for-profit chatbot would be protected by the “fair use” doctrine.
I don’t know who will come out on top in the lawsuit. But Winfrey is one of the world’s most influential content creators, and authors, artists and publishers have raised concerns – and filed lawsuits – over AI companies’ alleged appropriation of their intellectual property as training data. It would have been reasonable to expect her to press Altman on the issue.
It looks like we’ll have to wait for the next special to see if these issues are addressed.
The “godmother of AI” wants to help you create new worlds
If you follow AI news, you may have heard of the “godfathers of AI” – computer scientists Yoshua Bengio, Geoffrey Hinton and Yann LeCun, who frequently share their opinions on the risks, opportunities and speed of the technology’s development. Last week, it was the turn of Fei-Fei Li, a renowned AI researcher and professor at Stanford University, to take the spotlight. Considered the “godmother of AI”, Li launched a new company called World Labs after raising an impressive $230 million.
World Labs is dedicated to developing “large world models” focused on spatial intelligence, promising to create systems capable of “perceiving, generating and interacting with the 3D world”.
What does this really mean? Technology journalist Steven Levy, writing for Wired, explained that the aim of World Labs is to teach “deep knowledge of physical reality to AI systems”. This will allow artists, designers, game developers, movie studios and engineers who use these AI engines to become true “world builders”.
World Labs’ first product is expected to launch in 2025 – another indication of how quickly AI is evolving. Optimism about Li’s vision is high, with her startup already valued at more than $1 billion.
How much energy and water does an AI need to generate a short email?
We know that computing carries a significant environmental cost: there is the energy needed to power, and the water needed to cool, the server farms that house the processors, storage and network equipment connecting us to the internet every day.
But what is the environmental cost of a single chatbot query? The Washington Post decided to find out, in partnership with researchers at the University of California, Riverside. They found that generating a single 100-word email using OpenAI’s GPT-4 model, which powers ChatGPT, consumes 519 milliliters of water – slightly more than a full water bottle. That same email also uses 0.14 kilowatt-hours of electricity, enough to run “14 LED bulbs for 1 hour”.
It’s worth exploring the study to grasp the cumulative impact of these costs, especially considering that, according to the Pew Research Center, around a quarter of Americans have used ChatGPT since its launch.
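To get a feel for what those per-email figures could add up to, here is a rough back-of-the-envelope sketch in Python. The per-email water and energy numbers come from the Post’s study; the population figure and the one-email-per-person usage are my own illustrative assumptions, not data from the study or from Pew.

    # Back-of-the-envelope estimate built on the Washington Post /
    # UC Riverside per-email figures for a 100-word GPT-4 email.
    WATER_PER_EMAIL_ML = 519      # milliliters of water per email (study figure)
    ENERGY_PER_EMAIL_KWH = 0.14   # kilowatt-hours per email (study figure)

    # The usage assumptions below are illustrative, not from the study.
    US_ADULTS = 258_000_000       # rough U.S. adult population (assumption)
    CHATGPT_SHARE = 0.25          # ~a quarter have used ChatGPT, per Pew
    EMAILS_PER_USER = 1           # suppose each user sends just one such email

    users = US_ADULTS * CHATGPT_SHARE
    total_water_liters = users * EMAILS_PER_USER * WATER_PER_EMAIL_ML / 1000
    total_energy_kwh = users * EMAILS_PER_USER * ENERGY_PER_EMAIL_KWH

    print(f"{users:,.0f} users sending one 100-word email each:")
    print(f"  water:  {total_water_liters:,.0f} liters")
    print(f"  energy: {total_energy_kwh:,.0f} kWh")

Even under that very conservative one-email assumption, the totals come out to roughly 33 million liters of water and about 9 million kilowatt-hours of electricity.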
Public libraries can be allies in the fight against AI-generated disinformation
The Urban Libraries Council has released a valuable brief on how public libraries can use their position as community spaces to encourage face-to-face encounters. Besides helping to combat social isolation in an increasingly digital world, libraries can offer tools and workshops that teach people how to spot misinformation and misleading content spread on digital platforms.
“Several studies show that disinformation tends to thrive in highly polarized societies or in communities with low levels of social connection,” the council pointed out in its 10-page summary, entitled The role of libraries as public spaces in combating disinformation, misinformation and social isolation in the age of generative AI.
Among the library programs that have already stood out, the council mentioned the Boston Public Library, which hosted a workshop in August aimed at combating misinformation. The event focused on teaching digital literacy skills and offered tools to help people “identify accurate information on the internet”.
It’s worth remembering that, according to the American Library Association, there are more than 123,000 public libraries in the USA.