Community connecting tech, policy and politics

TheBridge Events

TheBridge has partnered with leading tech, policy and political organizations including Microsoft, Google, Twitter, WeWork, Postmates, The US Chamber's Technology Engagement Center (C_TEC), National Journal's Policy Makers Network, Verizon, Dell, Lyft, Quorum, Airbnb, 1776, DCode, The United States Digital Service (USDS), The Federal Communications Commission (FCC), Vrge Strategies, The Bank Policy Institute, Women's High-Tech Coalition, and more. 

Our events have included policy discussions, career networking events, a Women in Tech series, and resume and career workshops. How do you want to partner? Email us!

We announce our events in TheBridge Update (sent on Tuesdays and Thursdays). Click below to sign up for the Update and stay up to date!

TheBridge events are unique. They drive a conversation between the public and private sectors, promote understanding and collaboration, and catalyze meaningful and lasting connections. Our events range from intimate discussions among industry leaders, working groups, and roundtables to larger events broadcast live on C-SPAN, and they have attracted coverage from major outlets including PBS NewsHour, Bloomberg, Reuters, The Wall Street Journal, TechCrunch, Recode, and Politico, among others.

Want to attend an event? The best way to stay up to date with TheBridge events is through our twice-weekly community updates.

The Truth and Consequences of AI: Getting Ahead of Risk, Regulation and Ethics

TheBridge and the FiscalNote Executive Institute (FNEI) hosted an exclusive, off-the-record discussion with industry experts about taking a strategic approach to AI.

Most companies understand the value of artificial intelligence. The greater challenge? Getting ahead of the potential risks and ethical pitfalls AI can create.

Featured guests included Avi Gesser, Partner at Debevoise & Plimpton; Renée Cummings, Founder & CEO of Urban AI, LLC, East Coast Regional Leader of Women in AI Ethics, and Data Activist in Residence at the University of Virginia; and Vlad Eidelman, Chief Scientist and Head of AI Research at FiscalNote. Allie Brandenburger, Co-founder and CEO of TheBridge, led the discussion.

Key takeaways from the program can be found in the executive summary prepared by FNEI:

Most regulators are approaching AI like they approached cybersecurity.

  • A lot of the same regulators who focused on privacy and cybersecurity are now focused on AI.

  • Most regulators are zeroing in on three key areas: privacy, accountability, and vendor management.

  • On the privacy side, regulators are worried about data being used for purposes outside of what was originally intended. They want to ensure that companies have the right to use the information they’re putting into their model and that all the proper consents and notices are in place.

  • On the accountability side, regulators want to see a senior manager or a committee in place to oversee AI regulatory compliance and risk mitigation. Disclosure and transparency are also on their radar: regulators want to ensure that people know when a decision has been made by a machine, and what appeal rights they have if they disagree with that decision.

  • On the vendor management side, regulators are concerned about companies that are outsourcing their AI to third parties without any knowledge of that party’s internal AI policies or risk mitigation. They want companies to establish an AI vendor risk framework, which may include questionnaires, risk assessments and other contractual provisions that ensure some level of quality control.

  • For now, most regulatory efforts will mimic what happened with cybersecurity: piecemeal, state-level laws built from a patchwork of existing consumer protection, anti-discrimination, and privacy laws.

Until firm regulations are in place, it’s up to companies to do their due diligence.

  • With no federal AI regulations or firm guidelines in place, companies must be proactive about mitigating the business risks of AI, as well as protecting themselves against future regulations.

  • Companies need to take the lead and create internal structures and governance around AI. Possible strategies include (a minimal sketch of one such output test appears after this list):

    1. Implementing a risk rating system that clearly identifies high-risk AI.
    2. Reviewing inputs to make sure they line up with intentions.
    3. Testing outputs to ensure they're not behaving in a way that is unfair or erratic.
    4. Gathering an internal team comprising different departments (e.g., legal, compliance, audit, HR, business) to thoroughly analyze AI from all angles, reduce risk, and identify potential bias.
    5. Establishing a review process for third-party AI vendors to confirm they meet the same standards as internal AI.

  • Whatever proactive measures a company decides to take, documentation will be key. Regulators will need proof that companies have attempted to mitigate risk.
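To make the output-testing step above concrete, here is a minimal, hypothetical sketch in Python of one such check: a disparate impact ratio computed over model decisions grouped by a protected attribute. The function name, the sample data, and the 0.8 threshold (borrowed from the EEOC's "four-fifths" rule of thumb) are illustrative assumptions, not specifics discussed at the event.

    # Hypothetical sketch of an output-fairness check: compare approval
    # rates across groups and flag the model when the ratio falls below
    # a threshold. The 0.8 cutoff mirrors the EEOC "four-fifths" rule of
    # thumb; it is an assumption for illustration only.
    from collections import defaultdict

    def disparate_impact_ratio(decisions):
        """decisions: iterable of (group_label, approved) pairs."""
        approved, total = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            total[group] += 1
            approved[group] += int(ok)
        rates = {g: approved[g] / total[g] for g in total}
        return min(rates.values()) / max(rates.values()), rates

    if __name__ == "__main__":
        sample = [("A", True), ("A", True), ("A", False),
                  ("B", True), ("B", False), ("B", False)]
        ratio, rates = disparate_impact_ratio(sample)
        print(f"approval rates: {rates}, ratio: {ratio:.2f}")
        if ratio < 0.8:
            print("FLAG: review model for potential disparate impact")

A real program would choose metrics and thresholds together with legal and compliance teams, and document every run as evidence of risk mitigation, per the documentation point above.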

The business risks of AI are significant and could be detrimental to companies that aren’t proactive.

  • Many companies have focused their efforts on optimizing AI for accuracy and prediction without considering risk or ethical concerns. However, after watching the fallout in Big Tech, industry leaders agree that companies need to do more, and societal impact needs to be top of mind.

  • Because AI regulation is evolving, companies should design beyond the current law. This will decrease the potential losses their business could face down the road as new or updated regulations are put in place.

  • The biggest risk for companies is reputational. The public reaction to any AI failure will not only hurt business; it will likely inform the regulatory response as well.

Ethics needs to be considered across the entire AI pipeline—from development through deployment.

  • Companies that don’t prioritize ethics ahead of regulation are playing with fire. They may have the greatest AI product today, but end up defending it in court tomorrow.

  • When it comes to AI ethics, there is a lot of talk and “ethical theater,” but not a lot of follow-through.

  • Efforts are being made in academia to raise consciousness among data scientists so that ethics is considered at the design stage. The goal is to change thinking because thinking informs our technology.

  • We need to build the kind of consciousness required not only to do the right thing in real time, but also to understand an algorithm's impact and anticipate the long-term effects of AI. This will legitimize AI as an accurate and trustworthy technology.

  • Stakeholders need to understand the power of AI technology: while it is needed for progress, it also brings a dangerous form of privilege and prejudice. Data scientists, for example, have a lot of discretion over what data they use, where they get it, and how they write the code that uses it. Where there is discretion, there needs to be an ethical resilience that understands it's not only about risk; it's about doing what is right.

Looking ahead, AI regulation will likely take a "soft law" approach rather than impose stringent enforcement or rigorous guardrails.

  • The EU's proposed AI regulation is a good example of what a federal law could look like in the future, although that likely won't happen anytime soon.

  • Unless there is a crisis, most AI regulation will probably happen at the state level for the foreseeable future. Regulations will likely be industry-specific.

  • Regulators can’t keep up with the pace of innovation happening in the AI sector, making it challenging to develop any specific guidance. This will leave a decent amount of leeway for practitioners.

  • In general, regulators will want companies to look at high-risk use cases, ensure transparency in decision making, conduct proper testing to confirm there isn't bias, and monitor results on an ongoing basis. In any instance where AI behaves unexpectedly, companies will have a mandatory obligation to report it to the government.

The US is signaling, through an increasing number of legislative and executive initiatives (e.g., the Algorithmic Accountability Act and the NSCAI report), its desire to regulate the use of AI. The US Federal Trade Commission recently provided guidance on how to use artificial intelligence with an aim for "truth, fairness and equity." It's also critical for companies with global markets to understand the latest regulations from the European Union, which, if violated, could result in material fines and reputational damage.

The event took place on Thursday, June 17, 2021 with senior-level executives and professionals.

 
 

Sign up for TheBridge's twice-weekly community newsletter, TheBridge Update, here.

If you would like to partner with TheBridge on an event, email us at hello@thebridgework.com.
