
Showing posts from April, 2018

Government-Organised Non-Governmental Organisations/GONGOs: Would You Like to Know More?

By Péter MARTON

To answer the question in the title:* I would. Ironically, one of the best texts so far on a subject whose study originates in the Chinese context, with examples like the Human Rights Society of China in mind, is this brief paper by Chris Carothers – a research memo from a workshop. It is not bad for starters, though, and at least it offers a basic typology covering "propaganda", "militant" and "development" NGOs, along with some of the uses these have for autocratic and, let's say, imperfectly democratic political regimes (such as drawing funding away from genuine NGOs/civil society, or creating the semblance of mass support for government decisions, policies and even entire agendas). This is great, but of course the political regimes in question are more creative than this. Everyone who hasn't been living under a bucket lately is probably aware of the presence of GONGOs in politics, and how that is felt in even mo…

Station Eleven (the book): A Review of the Post-Apocalypse

By Péter MARTON

Having just finished Emily St. John Mandel's Station Eleven (the 2015 winner of the Arthur C. Clarke Award), a part post-apocalyptic, part pre-apocalyptic novel with the apocalypse (inevitably, yet mostly just implicitly) at its centre, here are a few quick notes, in praise as well as criticism – not a literary review, but mainly a reaction to the plot: its plausibility and its implications. As a work of literature I really liked this book – I enjoyed it, even. It is moody and haunting, as many would say. All the characters want to be somewhere else, even some time else. The story is effectively a collection of their memories upon memories of times, places and faces past. What follows here, however, is the dirty work – the ugly analysis of probabilities and plausibilities that is more interesting from a social science vantage point. A raw take on my part. Feel free (or invited, even) to add to this, or to criticise any element of my ass…

The AI logic bomb problem

By Péter MARTON

(The source of the illustration is this video.) Elon Musk brings up a familiar point about what could potentially go wrong with AI. This is not a novel argument, but it is formulated so clearly here that really everyone should understand it: "AI doesn't have to be evil to destroy humanity – if AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings," Musk said. "It's just like if we're building a road and an anthill happens to be in the way, we don't hate ants, we're just building a road, and so goodbye anthill." And let's not forget that people are also perfectly capable of setting goals that result in defining other people as obstacles to be removed from the way. So it wouldn't have to be AI vs. all of humanity, even. On the other hand, if you are interested in a more enjoyable, literary take on this…