
Concerned about data abuse? Join the USF Center for Applied Data Ethics to contribute and make a difference.

Discover the past three months of CADE's activities and learn how you can join in on the action.


In the rapidly evolving world of Artificial Intelligence (AI), concerns about ethics and accountability are gaining prominence. Here's a roundup of some of the latest developments and upcoming opportunities to learn more about these pressing issues.

At a recent Data Ethics Seminar, Deborah Raji emphasized the urgency of AI ethics and accountability work. One of the talks at the seminar, "Getting Specific About Algorithmic Bias," delved into various types of bias, debunked misconceptions, and shared steps towards solutions.

The topic of bias in healthcare algorithms has been a subject of intense scrutiny. Researchers, including Obermeyer and colleagues, found that widely used healthcare algorithms in the U.S. often predict high healthcare costs rather than actual illness severity. Because Black patients and other underserved groups often receive less care, cost is a poor proxy for medical need, and predictions based on it can lead to inequitable treatment recommendations. Separately, Cedars-Sinai published a study showing that large language models used in psychiatry exhibited racial bias, offering different treatment recommendations for African American patients than for white patients with similar conditions, particularly in schizophrenia and anxiety cases.
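The cost-as-proxy mechanism can be illustrated with a minimal sketch. The numbers and patient records below are hypothetical, not data or code from the Obermeyer et al. study; the point is only that ranking patients by spending under-selects a group that receives less care for the same level of illness.

```python
# Hypothetical illustration of label-choice bias: ranking patients by
# healthcare cost (a proxy) instead of illness severity (the real target).

def top_k_ids(patients, key, k):
    """Return the ids of the k patients ranked highest by `key`."""
    ranked = sorted(patients, key=lambda p: p[key], reverse=True)
    return [p["id"] for p in ranked[:k]]

# Synthetic patients: the two groups have identical illness severity,
# but group "B" incurs half the cost per unit of severity (less care received).
patients = []
for i, severity in enumerate([1, 2, 3, 4, 5]):
    patients.append({"id": f"A{i}", "severity": severity,
                     "cost": severity * 100})        # full access to care
    patients.append({"id": f"B{i}", "severity": severity,
                     "cost": severity * 100 * 0.5})  # half the spend

k = 5  # program capacity: enroll the 5 "highest-need" patients

by_severity = top_k_ids(patients, "severity", k)  # ideal target label
by_cost = top_k_ids(patients, "cost", k)          # proxy label

frac_B_severity = sum(1 for pid in by_severity if pid.startswith("B")) / k
frac_B_cost = sum(1 for pid in by_cost if pid.startswith("B")) / k

print(f"group B share, ranked by severity: {frac_B_severity}")
print(f"group B share, ranked by cost:     {frac_B_cost}")
```

With these made-up numbers, ranking by cost admits fewer group-B patients than ranking by severity would, even though the groups are equally sick, which is the shape of the disparity the researchers describe.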

These findings have prompted legislative actions, notably in California with SB 503, which requires AI healthcare tools to be tested for bias to ensure equitable treatment for diverse populations, and AB 3030, requiring disclosure when AI is used in clinical communications.

Russian influence operations in African countries were also raised, though no detailed reporting on them is covered here. For up-to-date investigative reports and analysis, consult security think tanks, international relations researchers, or news outlets focused on geopolitical influence and disinformation in Africa.

Criticism of Sidewalk Labs' Indigenous consultation as "hollow and tokenistic" was also noted, though without supporting detail here. For deeper insight, look to specialized reports, statements from Indigenous advocacy groups, or investigative journalism on Sidewalk Labs' community engagement practices.

For those interested in staying engaged, upcoming events and opportunities to learn about and engage with issues of data misuse—including bias, surveillance, and disinformation—include legislative panels, medical AI ethics conferences, grants, and workshops. For instance, the Tech Policy Workshop is being hosted Nov 16-17 by the USF Center for Applied Data Ethics in San Francisco.

Full-time data ethics fellowships are also being offered, with applications reviewed after November 1, 2019, and roles starting in January or June 2020. At a previous event, Brian Brackeen, founder of a facial recognition startup, spoke about racial bias in facial recognition and his current work funding underrepresented founders in tech.

To stay informed about data ethics events and news, consider joining the CADE mailing list. For a deeper understanding of the historical and cultural context of ethics and justice problems in technology, it's worth following Ali Alkhatib, another Data Ethics Seminar speaker.

In summary, the issues of data misuse, including bias, surveillance, and disinformation, continue to be urgent and pervasive. To learn more, engage, and make a difference, consider the opportunities and key sources outlined above.

  1. Machine learning algorithms in healthcare have been found to display bias, as identified by researchers such as Obermeyer and colleagues who pointed out that these algorithms often predict high healthcare costs instead of actual illness severity.
  2. The topic of bias in healthcare algorithms has led to legislative actions, with California's SB 503 requiring AI healthcare tools to be tested for bias to ensure equitable treatment for diverse populations.
  3. To learn about and engage with issues of data misuse, including bias, surveillance, and disinformation, upcoming events and conferences are available, such as the Tech Policy Workshop being hosted Nov 16-17 by the USF Center for Applied Data Ethics in San Francisco.
  4. Full-time data ethics fellowships can be applied for, with roles starting in January or June 2020.
  5. For those interested in data ethics, joining the CADE mailing list and following speakers like Ali Alkhatib can provide valuable insights into the historical and cultural backdrops for ethical and just problems in technology.
  6. Learning opportunities in AI extend beyond formal coursework: online education in data and cloud computing, cybersecurity, and ethics, particularly as they relate to AI, is becoming increasingly important.
  7. Research is essential to understanding and addressing the urgent ethical concerns surrounding AI. In this evolving landscape, ongoing learning and engagement will help drive accountability and progress in technology.
