Tag: AI

  • Data Security Concerns Over the Use of Generative AI Tools

    A study by the Israeli firm Team8 was widely picked up by media outlets because of the concerns it raises about corporate secrets and customer information.

    As one report says: 

    “The report said that companies using such tools may leave them susceptible to data leaks and lawsuits. The chatbots can be used by hackers to access sensitive information. Team8’s study said that chatbot queries are not being fed into the large language models to train AI since the models in their current form can’t update themselves in real-time. This, however, may not be true for the future versions of such models, it added.”

    Bloomberg News covered the study first and is said to have received it “prior to its release.” As the Bloomberg report says: 

    Major technology companies including Microsoft Corp. and Alphabet Inc. are racing to add generative AI capabilities to improve chatbots and search engines, training their models on data scraped from the Internet to give users a one-stop-shop to their queries. If these tools are fed confidential or private data, it will be very difficult to erase the information, the report said. 

    Read the complete Bloomberg report on the Team8 study here.

  • Generative AI in Finance

    This report in Forbes covers the research paper released earlier by Bloomberg introducing BloombergGPT, which applies ChatGPT-style machine-learning techniques to financial datasets drawn from Bloomberg’s own vast repertoire and beyond.

    Forbes’s “back-of-napkin cost estimation” puts the cost of the Amazon Web Services cloud computing at roughly “$2.7 million to produce the model alone.”
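
    One way to sanity-check that number (our arithmetic, not Forbes’s; the inputs are assumptions drawn from the BloombergGPT paper and AWS’s published pricing): the paper reports roughly 53 days of training on 64 p4d.24xlarge instances, which AWS lists at about $32.77 per hour on demand.

    ```python
    # Back-of-napkin reconstruction of the ~$2.7M AWS figure.
    # Assumptions (not from the Forbes piece itself): the BloombergGPT paper
    # reports ~53 days of training on 64 p4d.24xlarge instances (8 A100 GPUs
    # each), and AWS lists the instance at ~$32.77/hour on demand.

    instances = 64                 # p4d.24xlarge instances (512 A100 GPUs total)
    days = 53                      # approximate training duration
    usd_per_instance_hour = 32.77  # published on-demand rate

    cost = instances * days * 24 * usd_per_instance_hour
    print(f"~${cost:,.0f}")        # ~$2,667,740 -- in line with Forbes's ~$2.7M
    ```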

    After listing the datasets used to train the model, the report goes on to speculate about the uses BloombergGPT could potentially be put to, such as drafting Securities and Exchange Commission (SEC) filings, researching companies, individuals, and their linkages, drafting market reports and summaries, and fetching financial statements.

    Read the full Forbes report here. Read the Bloomberg announcement of BloombergGPT here.

  • Countering Fake News and Ideological Bias in Reporting

    Ground News is an interesting effort to counter fake news and ideological bias in reporting. According to their website:

    “Ground News is a News Aggregation platform that helps users expand their view of the news and easily compare how a story is being reported across the political spectrum. We identify all news articles written on a story and arrange the organizations reporting on the event into categories of political bias, geographic location, and chronology. News is aggregated from over 50,000 news sources, including many alternative, independent sources that aren’t confined to the mainstream news narrative. This puts our community in a position to choose and easily compare the news they want to read, not just have it pre-selected for them by algorithms designed to drive clicks.”

    Some features on both the website and the app are behind a paywall, but either version can be used or downloaded for free.

    Visit the Ground News website here.

  • The CERN of AI Research

    Large-scale Artificial Intelligence Open Network (LAION) has launched this petition to “democratize AI research by establishing an international, publicly funded supercomputing facility equipped with 100,000 state-of-the-art AI accelerators to train open source foundation models.” They are calling such a proposed facility “a CERN for open source large-scale AI research and its safety.” 

    Significantly, the petition has this to say on AI Safety research: 

    “The proposed facility should feature AI Safety research labs with well-defined security levels, akin to those used in biological research labs, where high-risk developments can be conducted by internationally renowned experts in the field, backed by regulations from democratic institutions. The results of such safety research should be transparent and available for the research community and society at large. These AI Safety research labs should be capable of designing timely countermeasures by studying developments that, according to broad scientific consensus, would predictably have a significant negative impact on our societies.”

    LAION’s website describes them as a non-profit “aiming to make large-scale machine learning models, datasets and related code available to the general public.”

    Read the full petition here. Read more about LAION’s work and philosophy on their team blog here.

  • Preventing Harm Caused by Machine Learning

    “As a leading researcher on the ethics of artificial intelligence, Timnit Gebru has long believed that machine-learning algorithms could one day power much of our lives,” writes Emily Bobrow in this profile for The Wall Street Journal.

    “Because machine-learning systems adopt patterns of language and images scraped from the internet, they are often riddled with the internet’s all-too-human flaws,” and Gebru is well known for her work in trying to change that. As Bobrow points out:

    “For years, Dr. Gebru earned notoriety as an in-house AI skeptic at big tech companies. In 2018, while she was working at Microsoft, she co-authored a study that found that commercial facial-analysis programs were far more accurate in identifying the gender of white men than Black women, which the researchers warned could lead to damaging cases of false identification. Later, while working at Google, [she] called on companies to be more transparent about the errors baked into their AI models.”

    Gebru “hopes for laws that push tech companies to prove their products are safe, just as they do for car manufacturers and drug companies.”

    At Distributed Artificial Intelligence Research Institute (DAIR), a non-profit she launched in 2021, “Dr. Gebru is working to call attention to some of the hidden costs of AI, from the computational power it requires to the low wages paid to laborers who filter training data.”

    Read the full article here.

  • Claude Shannon and Information Theory

    In this tribute for Quanta Magazine, Stanford professor David Tse highlights the remarkable contributions of Claude Shannon.

    Summing up Shannon’s foundational contribution to information theory, Tse writes: “in a single groundbreaking paper, he laid the foundation for the entire communication infrastructure underlying the modern information age.” Shannon “applied a mathematical discipline called Boolean algebra to the analysis and synthesis of switching circuits.” This was such an important development that it “is now considered to have been the starting point of digital circuit design.”
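
    To make the switching-circuit idea concrete (our illustration, not Tse’s): Shannon showed that relay circuits and Boolean formulas compute the same functions, so a circuit can be designed and simplified algebraically. A minimal sketch:

    ```python
    # A Boolean formula and a switching circuit compute the same function.
    # The classic two-switch "staircase light" circuit is exactly XOR:
    # the light changes state whenever either switch is flipped.
    def staircase_light(a: bool, b: bool) -> bool:
        # Algebraic form: (a AND NOT b) OR (NOT a AND b)
        return (a and not b) or (not a and b)

    # Truth table: the circuit's behavior falls out of the algebra.
    for a in (False, True):
        for b in (False, True):
            print(f"a={a!s:5} b={b!s:5} light={staircase_light(a, b)}")
    ```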

    All our digital communication technologies can be traced back to Shannon’s work. For instance, consider:

    “Another unexpected conclusion stemming from Shannon’s theory is that whatever the nature of the information — be it a Shakespeare sonnet, a recording of Beethoven’s Fifth Symphony or a Kurosawa movie — it is always most efficient to encode it into bits before transmitting it. So in a radio system, for example, even though both the initial sound and the electromagnetic signal sent over the air are analog wave forms, Shannon’s theorems imply that it is optimal to first digitize the sound wave into bits, and then map those bits into the electromagnetic wave. This surprising result is a cornerstone of the modern digital information age, where the bit reigns supreme as the universal currency of information.”
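
    For reference, the two standard results behind that claim (conventional statements in conventional notation, not quoted from Tse’s article): the source coding theorem gives the minimum number of bits needed to represent a source, and the channel coding theorem gives the maximum rate at which bits can cross a noisy channel.

    ```latex
    % Source coding theorem: a source X cannot be losslessly compressed below
    % its entropy H(X) bits per symbol, and rates arbitrarily close to H(X)
    % are achievable.
    \[ H(X) = -\sum_{x} p(x)\,\log_2 p(x) \]

    % Noisy-channel coding theorem (Gaussian channel form): a channel of
    % bandwidth B with signal-to-noise ratio S/N supports reliable
    % communication at any rate below the capacity C, in bits per second.
    \[ C = B \log_2\!\left(1 + \frac{S}{N}\right) \]
    ```

    Because the two theorems decouple compression from transmission, nothing is lost in principle by digitizing first, which is exactly the architecture the passage describes.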

    Read the full article here.

  • Using AI to Detect Patterns in Animal Communication

    This article published by the World Economic Forum delves into the potential of AI analysis of “the vast amounts of animal communication data that is being collected with increasingly sophisticated sensors and recording devices.”

    The process “includes analysing large data sets that contain visual, oral and physical animal communications.” “The goal,” according to researchers, “is to determine under what conditions an animal produces a communication signal, how the receiving animal reacts and which signals are relevant to influencing actions.”

    To arrive at a richer understanding, “AI-powered analysis of animal communication includes data sets of both bioacoustics, the recording of individual organisms, and ecoacoustics, the recording of entire ecosystems, according to experts.”
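
    As a flavor of what the first step in such an analysis often looks like (a minimal sketch under our own assumptions, not the researchers’ actual pipeline; it uses the common librosa audio library, and whale_call.wav is a hypothetical recording): bioacoustic audio is typically converted into a spectrogram before a model ever sees it.

    ```python
    # Minimal bioacoustics preprocessing sketch (not the pipeline described
    # in the article): turn a field recording into a mel spectrogram, the
    # usual input representation for audio ML models.
    # Assumes the librosa library; "whale_call.wav" is a hypothetical file.
    import librosa
    import numpy as np

    y, sr = librosa.load("whale_call.wav", sr=None)  # keep native sample rate

    # Mel spectrogram: time-frequency energy on a perceptually spaced axis.
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    mel_db = librosa.power_to_db(mel, ref=np.max)    # log scale in decibels

    print(mel_db.shape)  # (n_mels, n_frames) -- ready for a classifier
    ```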

    Importantly:

    “There are ethical concerns that researchers are confronting, too. This includes, most notably, the possibility of doing harm by establishing two-way communication channels between humans and animals—or animals and machines.”

    Read the full article here.

  • OpenAI’s Plans for AGI

    OpenAI has made unprecedented waves in the field of AI with ChatGPT. Coming from such a key player, this mission statement of sorts about the company’s plans for AGI, attributed to CEO Sam Altman, makes for necessary reading for anyone with an eye on AI, if not for every literate citizen of the world.

    Read about OpenAI’s plans regarding AGI here.