ANTI FAKE NEWS Facebook's chief security officer lashes out at critics on Twitter: Identifying fake news and propaganda with algorithms alone is very difficult; whoever demands that does not understand how such systems can be abused
Alex Stamos is visibly irritated by comments from journalists who criticized Facebook's decision to manually moderate sponsored ads touching on "politics, religion, ethnicity or social issues" before they go live in users' feeds.
Prompted by this criticism, Facebook's chief security officer unleashed a series of 18 messages of up to 140 characters on Twitter, attacking journalists for being poorly informed about the use of technology, artificial intelligence and automation to identify fake news and propaganda, and about the risk of training such systems on ideologically biased data.
Alex Stamos also says that people are not aware of what they are asking of Silicon Valley specialists, and fears that one day those wishes will come back to punish them: "When the gods wish to punish us they answer our prayers."
I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.
Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.
In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.
For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.
Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.
A bunch of the public research really comes down to the feedback loop of "we believe this viewpoint is being pushed by bots" -> ML
So if you don’t worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it’s easy!
Likewise all the stories about "The Algorithm". In any situation where millions/billions/tens of Bs of items need to be sorted, need algos
My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.
And to be careful of their own biases when making leaps of judgment between facts.
If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased
If your piece assumes that a problem hasn’t been addressed because everybody at these companies is a nerd, you are incorrect.
If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common.
If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad
Likewise if your call for data to be protected from governments is based upon who the person being protected is.
A lot of people aren’t thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers.
Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. FIN
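Stamos's point about the feedback loop, that a classifier trained on ideologically biased labels learns the labelers' ideology rather than veracity, can be illustrated with a toy sketch. The data, labels, and simplified Naive-Bayes scoring below are entirely hypothetical and are not Facebook's system; they only show the mechanism he warns about.

```python
from collections import Counter

# Hypothetical toy corpus: the "fake"/"real" labels encode the
# annotators' ideology -- every article mentioning "tariffs" was
# marked fake, regardless of whether it was true.
train = [
    ("new tariffs will destroy jobs", "fake"),
    ("tariffs hurt consumers says study", "fake"),
    ("local team wins championship", "real"),
    ("city opens new public library", "real"),
]

# Count how often each word appears under each label.
word_counts = {"fake": Counter(), "real": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text):
    """Naive-Bayes-style scoring with simplified add-one smoothing."""
    scores = {}
    for label in ("fake", "real"):
        total = sum(word_counts[label].values())
        score = label_counts[label] / sum(label_counts.values())
        for w in text.split():
            score *= (word_counts[label][w] + 1) / (total + 1)
        scores[label] = score
    return max(scores, key=scores.get)

# A perfectly factual headline gets flagged "fake" simply because it
# shares vocabulary with the ideologically labelled training set.
print(classify("government announces tariffs on steel"))  # -> fake
```

The classifier never saw evidence about truth or falsehood; it only learned which topics its annotators disliked, which is exactly the "Ministry of Truth" risk Stamos describes.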
Facebook is trying to figure out how to monitor the use of its platform without censoring ideas, and to that end it has added 1,000 new employees to moderate sponsored ads. The decision comes after it emerged that the Russian government used fake accounts to sow political discord in the US ahead of the elections.