Oct 13, 2023
REPORT: Tech Industry Is Funding AI Advisers To Take Over Capitol Hill
A dozen AI fellows in key congressional offices are being funded by Open Philanthropy, a group backed by Silicon Valley billionaires.
A new report from Politico
dives into the inner
workings of a billionaire-backed
network of AI fellows
in key congressional offices who
are helping to shape AI policy.
Now, the key player in this is
an organization called Open Philanthropy.
They're a self-described philanthropic
funder whose mission is,
[00:00:19]
according to them, to "help others as much as
we can with the resources available to us."
So, a little bit from that excellent
reporting from Politico.
The fellows funded by Open Philanthropy,
which is financed primarily by
billionaire Facebook co-founder and
[00:00:34]
Asana CEO Dustin Moskovitz and his wife
Cari Tuna, are already involved in
negotiations that will shape Capitol
Hill's accelerating plans to regulate AI.
Now, they've been acting through
the Horizon Institute for
Public Service, a nonprofit
that Open Philanthropy
[00:00:53]
effectively created in 2022.
Now, the group is funding the salaries of
tech fellows in some key Senate offices.
So a little bit more on that.
Current and former Horizon AI fellows
with salaries funded by Open Philanthropy
are now working at
the Department of Defense,
[00:01:09]
the Department of Homeland Security,
and the State Department,
as well as in the House Science Committee
and Senate Commerce Committee,
two crucial bodies in
the development of AI rules.
They also populate key think
tanks shaping AI policy,
including the RAND Corporation and
Georgetown University's Center for
[00:01:26]
Security and Emerging Technology,
according to the Horizon website.
In 2022, Open Philanthropy set
aside nearly $3 million to pay for
what ultimately became the initial
cohort of Horizon fellows.
So now, Open Philanthropy has attempted
to convince people that they're
[00:01:45]
entirely separate from Horizon, but
it's very obvious that the organization
is simply using Horizon to
mask the fellows' ties to the funder.
Now, because Horizon is
considered a nonprofit,
how much money Open Philanthropy has spent
on Horizon fellows since its initial grant
[00:02:02]
is something that we can't know.
Now, this opens the door to
multiple conflicts of interest.
A little bit more on that.
Tim Stretton, director of
the congressional oversight initiative at
the Project On Government Oversight,
said congressional fellows should not be
allowed to work on issues where their
[00:02:19]
funding organization has specific
policy interests at play.
He added that fellows should
not draft legislation or
educate lawmakers on topics where
their backers conceivably stand to
gain, a dynamic apparently at play in
the case of Horizon's fellowship program,
given Open Philanthropy's
ties to OpenAI and Anthropic.
[00:02:37]
Now, when asked about the ethical and
conflict-of-interest issues,
Horizon co-founder and
executive director Remco Zwetsloot said
the fellowship program is not for
the pursuit of particular policy goals,
[00:02:53]
does not screen applicants for
belief in long-term AI risks, and
includes fellows with a diverse set
of views on AI's existential dangers.
And we're talking about AI, so
this might be a little bit complicated.
So I'll just share the analogy
that the Politico article made when
[00:03:11]
speaking about this, which I think
puts it quite succinctly.
If AIPAC had a nonprofit wing of
its organization that was funding
Middle Eastern foreign policy
fellows in congressional offices,
[00:03:26]
that would be clearly
a conflict of interest.
But that is precisely
what is happening here.
So I'll open it up to you guys.
Now, we've got a little bit more,
but first, your thoughts on this.
>> Speaker 2: All right, thank you for
running the story because it gives me
[00:03:42]
an opportunity to talk about
what's actually going on with AI.
So these are comments I've made
multiple times on the Young Turks and
other networks.
The AI systems that generally are being
spoken about today are not traditional AI
systems.
They're using our own personal data
collected by large server farms to mine
[00:04:00]
and process that data because the storage
and processing of data has become
exponentially cheaper over time,
which is an amazing technical development.
They're using all of our personal
data to basically create systems that
can emulate human behavior,
that can mimic human behavior.
[00:04:18]
So that's, simply put, what's going on.
Anytime you see words, especially
in relation to tech, like open, or
philanthropy, or innovation, or
disruption, that's usually a sign
that they're using branding to
bait and switch you, right?
[00:04:36]
So the term open is like
the gospel in tech worlds,
which I come out of in some ways.
OpenAI is not open, because they don't
disclose what data is being gathered about
us, what they're doing with that data,
how long they're retaining it for, or
how their business model is associated with it.
[00:04:56]
Similarly here, okay, so
what that branding is doing is masking
the actual tasks that need to be
done to regulate AI so
it serves people, laborers,
our planet, and so on.
And the other thing that they're doing,
and this story alludes to it, is a common
[00:05:16]
thing we've seen, especially with
the emergence of GPT and OpenAI and so on:
We're the only ones who can regulate these
things because we're the only ones who
know what's actually going on.
Never mind that we call ourselves open.
We're the only ones who
actually know what's going on.
We, being tech, CEOs,
investors, those people.
[00:05:35]
And that's because intentionally,
these systems are built in opaque
ways based on the surveillance and
capture of personal data.
They have intentionally built
systems that order the world for
us in ways that 99.9% of us have no
visibility or accountability over.
[00:05:53]
So, that's why you hear about
the terminator scenario,
which is there's an existential risk.
That's why you hear about,
we're the ones best left to
regulate all of this ourselves.
That all masks the realities of
what these AI systems are doing,
[00:06:08]
which is threatening workers at a time
when, thankfully,
we have more unionization energy than
at any point in recent history that I can remember.
They mask the racist and incarcerating
aspects of these technologies.
They're being used in prisons,
police departments, and so on.
[00:06:26]
They're misidentifying people as
criminals who are not criminals,
merely because they're Black.
These are all examples of what we
could call algorithmic violence or
AI-level violence.
So, every time one of these stories
like this drops, and I'm so
glad the Young Turks
covers stories like this,
[00:06:43]
we have to actually look beneath the
veneer and all the fancy
branding language that tech companies
always use, at what is actually at stake.
It's a threat to the working class and
a threat to our own personal data.
Instead, you can imagine how machine
learning systems, including large language
[00:07:00]
model systems like ChatGPT,
could actually serve everybody.
That would take actual regulation because
that would involve people who actually
care about the working class, about
citizens, about democracy, and so on.
The main existential risk is the
possibility of building an AI system that
[00:07:16]
can do mass-scale disinformation.
And that is a realistic scenario, but
that's kind of already happening,
by the way.
So I think the main thing here to
recognize is they are shielding themselves
from actual regulation.
So what if the National Labor Relations
Board was the actual regulator of these
[00:07:32]
systems?
What if the FTC was the actual agency that
could actually change these systems?
I think there's a lot of possibility
to transform the relationship between
systems of all kinds and our democracy and
our economy to support everybody.
[00:07:48]
But that is absolutely
not what's happening.
So just don't believe the hype.
Don't believe the hype when it comes to
what tech companies always push out upon
us, because they're not tech companies.
They're the wealthiest and
most powerful companies in many
cases in the history of the world.
Many of them are military contractors,
too.
[00:08:05]
And that's very worth noting.
>> Speaker 3: Yeah, that last part
especially is deeply, deeply concerning.
Ramesh knows this pretty much
as well as anybody does.
So I'll just comment: look, even if
it's philanthropy or it's a nonprofit,
first of all,
a lot of nefarious things are
disguised as nonprofits.
[00:08:23]
But even if they meant well,
you cannot allow outside forces
to embed inside the government.
As Rayyvana mentioned,
AIPAC's theoretically a nonprofit, right?
But they are for
the benefit of a different country.
[00:08:40]
But you could pick anything.
Saudi Arabia has tons of nonprofits.
Are we going to let them embed
inside the United States government?
That's crazy, right?
And so I'm worried about the wholesale
takeover of our government.
And there's another thing that is,
in a sense,
[00:08:58]
AI that has already taken
over the government.
It's called corporations.
So we created corporations
like we created AI, and
we wrote the code for corporations.
And the code we wrote
was disastrously wrong.
We wrote only one line of code:
maximize profit.
[00:09:16]
And when you do that, and
you let that AI machine run wild,
it maximizes profit at the cost
of everything in its path,
including us, the humans that
created those corporations.
And what did they do?
[00:09:31]
They eventually figured out
how to buy the Supreme Court.
And this is exactly the core of our
problem that I explained in the book
that I just wrote, Justice Is Coming.
And after they'd taken over the Supreme
Court, the Supreme Court allowed for
bribery, which then allowed them to
take over all of the rest of government.
[00:09:50]
Now, the embeds they have in our
government are the senators and
representatives themselves.
So now here comes new
embeds within those embeds.
But the only people not
represented in all of this is us,
the actual human beings that this
democracy is supposed to serve.
[00:10:07]
There's something deeply
wrong in Washington.
We have to take our government back from
all of these things that we created that
have now run amok and are now controlling
us rather than the other way around,
which is how it was supposed to be.
The Young Turks: October 13, 2023