Social Architecting and the Narrows

March 3, 2011

A few weeks ago, the Web Ecology Project concluded an experiment called Socialbots: a competition in which three teams programmed bots to enter and influence large groups of users on Twitter over a two-week, all-out battle of automated social shaping. We’re still sifting through all the data generated, though the code from the competition has been released as open source, and we’ve been talking about the project with a variety of different folks, trying to think through the implications of what’s come out of it all. In the meantime, I wanted to write up how it all turned out, and to introduce the next project we’re planning given those results (we’re looking for people who might be interested in collaborating with us).

The results: tremendously exciting! With only two weeks of coding time, the three competing teams built bots that, even following rudimentary patterns of behavior, elicited an enormous amount of activity in the target social cluster. The winning team’s bot alone built 107 mutual connections with the targets and elicited close to 200 responses (@replies, retweets, etc.). In all, the bots collectively generated close to 250 responses and received mutual connections from close to half of the entire target set. The bots also had a strong effect on the topology of the social graph: in two weeks, they heavily shaped and distorted the structure of the network, bringing together people not originally connected and forming a community of activity around the bots themselves (the picture above is the final end-state network graph from the game).

The ultimate “so what” of this? Beyond just competitions, it opens the possibility of building a class of technologies that could be used to do targeted social shaping on a very large scale. Socialbots is essentially that proof of concept. To that end, swarms of bots with statistically predictable social outcomes could be built and used to actively sculpt and rewire the connections of online social groups consisting of thousands (or perhaps hundreds of thousands) of users.

So we’re setting our sights a bit higher now. What we’re working on next is “The Narrows,” the first robo-constructed social superstructure, leveraging and extending the technology from Socialbots to engage in building mega-scale community architecture. That’s pretty abstract, but the idea behind the project is simple and concrete: we’re going to survey and identify two unconnected Twitter communities of roughly 5,000 users each, and over a six-to-twelve-month period use waves of bots to thread and rivet those clusters together into a directly connected social bridge between the two formerly independent groups. The bot-driven social “scaffolding” will then be dropped away, completing the bridge, with swarms of bots launched to maintain the superstructure as needed.
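To make the “scaffolding” idea concrete, here is a toy simulation of the three phases described above: bots befriend members of both clusters, introduce their contacts across the divide, and are then removed, leaving only direct human-to-human edges. All of the numbers, probabilities, and behaviors below are my own illustrative assumptions, not parameters from the actual project.

```python
import random

random.seed(42)

# Two unconnected communities plus a small swarm of bridging bots.
cluster_a = [f"a{i}" for i in range(50)]
cluster_b = [f"b{i}" for i in range(50)]
bots = [f"bot{i}" for i in range(5)]

edges = set()  # undirected mutual (follow-back) connections

def connect(u, v):
    edges.add(frozenset((u, v)))

# Phase 1: each bot befriends a random sample from both clusters.
for bot in bots:
    for user in random.sample(cluster_a, 10) + random.sample(cluster_b, 10):
        connect(bot, user)

# Phase 2: bots introduce their contacts across clusters; each
# introduction becomes a direct mutual edge with assumed probability 0.3.
for bot in bots:
    contacts = [u for e in edges if bot in e for u in e if u != bot]
    a_side = [u for u in contacts if u.startswith("a")]
    b_side = [u for u in contacts if u.startswith("b")]
    for u in a_side:
        for v in b_side:
            if random.random() < 0.3:
                connect(u, v)

# Phase 3: drop the bot scaffolding; only human-human edges remain.
edges = {e for e in edges if not any(n.startswith("bot") for n in e)}

cross = [e for e in edges if {u[0] for u in e} == {"a", "b"}]
print(f"direct cross-cluster edges after bot removal: {len(cross)}")
```

The point of the sketch is just the shape of the process: the bridge survives even after every bot edge is deleted, which is the whole premise of the project.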

In any case, we’re getting a team together to embark on this project and form a social architecture team, the first construction project of a group that we’re calling “Pacific Social.” If you’re interested in playing a role, drop me a line!


The Standing Committee on the Robotic Corporation

September 9, 2010

Today, I’m excited to announce that I’ll be chairing the newly forged Web Ecology Standing Committee on the Robotic Corporation (SCRC). The concept behind the committee springs from Web Ecology Camp and the discussions that followed the release of PawnFarm, which lets users pump out their own Amazon Mechanical Turk-driven robots to systematically crawl through Twitter and socialize with unsuspecting people.

Specifically, we’ve been struck by a tantalizing idea: that, at its heart, the social nature of the web allows robots to shape, manage, and direct human behavior on a broad new scale.

So, could a script interface with the multitude of online labor markets out there (Elance, AMT, oDesk) to power and coordinate everything end-to-end and spin up a fully functioning corporation? Suppose that somewhere on the internet, a script puts out a job and hires someone to come up with a business plan, then farms out that plan to be broken into small, distributable steps, then hires people to deploy the project, and then, once the infrastructure is humming along, hires a group to staff it?
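The loop above can be sketched in a few lines. The labor-market client below is a deliberately fake stand-in stub: the real services (Mechanical Turk, oDesk, Elance) each have their own APIs, and nothing here reflects those APIs; every class, method, and dollar figure is a hypothetical placeholder just to show the shape of the pipeline.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    payment: float  # illustrative dollar amount, not a real price

class StubLaborMarket:
    """Hypothetical uniform interface over human labor markets."""
    def post(self, task: Task) -> str:
        # In reality this would create a job listing (e.g. a HIT) and
        # return the worker's output once someone completes it.
        return f"[worker output for: {task.description}]"

def run_robot_corporation(market):
    # Step 1: hire someone to produce a business plan.
    plan = market.post(Task("Write a one-page business plan", 25.0))
    # Step 2: hire someone to break the plan into small steps.
    steps = market.post(Task(f"Split into small tasks: {plan}", 10.0))
    # Step 3: hire workers to execute each step.
    results = [market.post(Task(s, 5.0)) for s in steps.split(";")]
    # Step 4: hire ongoing staff to keep the operation running.
    staff = market.post(Task("Staff and maintain the project", 100.0))
    return {"plan": plan, "results": results, "staff": staff}

outcome = run_robot_corporation(StubLaborMarket())
print(outcome["plan"])
```

The interesting design question is entirely in what replaces the stub: the script is pure coordination, and every step that requires judgment is farmed back out to humans.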

What if these business ideas became not just sustainable, but highly profitable? Could the robot corporation diversify into investments? Venture funding? Storefronts? What’s the technology that would undergird such an entity? What’s the simplest set of rules for a program you could launch to accomplish this? How might one construct a business or organization that is computer managed, but human driven?

This is the subject of the SCRC.

Like the Web Ecology Camps, the idea will be to meet quarterly to develop out various aspects of the idea, share projects, and eventually move to prototyping/implementing projects experimenting in this vein. At least initially, we’re keeping fellowships on the SCRC capped to keep the discussions intimate, but drop a line if you’re interested in participating!

The first plenary session of the Committee will be held on October 16th, 2010 in San Francisco, CA. Interested parties should drop a line!

The Gallery of the Pop Culture Megamix

June 10, 2010

For the past few months, I’ve been hungrily scouring the web in an effort to find and collect that most breathtaking of obsessive compulsive video montage masterpieces: the pop culture megamix.

Lovingly dedicated to curating and documenting every tiny, weird repetitive tic of celebrities or Hollywood plot device, the megamix genre is a thing of beauty. Like watching glitches in some bizarre pop culture Matrix, it’s unsettling — but really in only the best possible way.

In any case, if you’ve spent any amount of time online, you’ve no doubt seen some of these and heard about others. In my opinion, it’s totally worth going back and experiencing them all (and catching up on the ones you haven’t). I’ve collected all the best I could find and included them all below.

So, enjoy. If I’m missing any that you think should be included, absolutely drop a comment!

(Obscenely) more videos, after the jump…

Read more…

Das Zuck-ital: The Economics of Social Networks and the Collapse of Privacy

June 4, 2010

After the wave of anger following instant personalization, Facebook has since issued an odd, self-justifying blog post and withdrawn to a kinda-sorta seeming compromise on its privacy policy. But generally the lingering question seems to be not if, but when, the next privacy-violating move by a fallen social media darling goes down.

But even beyond this question, I think there’s an interesting deeper issue lurking behind all the media hullabaloo of the Facepalm saga, a question that has more to do with the entire ecosystem of social media generally, rather than the specifics of who or what is going to pose a threat to personal information. Namely:

Is Facebook just an isolated case of a company gone wrong? Or are increasing violations of privacy just the typical behavior of a mature for-profit social network?

And, more generally, are privacy violations on the part of traditional social networks just a unique case of social media gone wrong? Or are they just the first in a broad and growing trend among all social media services (thinking here of microblogging, etc etc)?

The key here might be to examine the rawest motivation of these companies, which, like all companies, is simply the need to stay solvent. And, if advertising revenue is the cornerstone of the business model for social networks, then one of the fundamental engines determining their activity is a simple formula: the formula for advertising revenue:

Total Advertising Revenue = (Total Impressions of an Ad) × (Clickthrough Rate of the Ad) × (Cost Per Click)

From this formula, I’ve been thinking that there’s a way to derive a pretty neat little economic model. A model which interestingly seems to suggest that as businesses based primarily on revenue from online advertising continue to expand, they will inevitably try more and more intrusive strategies to acquire data about users. In short, invasions of privacy are part and parcel of the mature behavior of a certain type of business that makes online advertising a cornerstone. What’s worrisome, of course, is just how much of the web startup world “businesses that make online advertising a cornerstone” covers.
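The incentive is easy to see numerically. Below is a toy version of the revenue formula with one illustrative assumption layered on top: that clickthrough rate rises with the amount of targeting data collected, with diminishing returns. The functional form and all the numbers are mine, not the post’s; they exist only to show that, under any such assumption, more data means more revenue, so the pressure to collect it never goes away.

```python
def ad_revenue(impressions, ctr, cost_per_click):
    # (Total Impressions) x (Clickthrough Rate) x (Cost Per Click)
    return impressions * ctr * cost_per_click

def ctr_from_data(base_ctr, data_level):
    # Assumed relationship: better targeting data lifts CTR,
    # with diminishing returns as data_level grows.
    return base_ctr * (1 + 0.5 * (1 - 1 / (1 + data_level)))

impressions, cpc, base_ctr = 1_000_000, 0.50, 0.01

for data_level in (0, 1, 4, 9):
    ctr = ctr_from_data(base_ctr, data_level)
    rev = ad_revenue(impressions, ctr, cpc)
    print(f"data={data_level}: revenue=${rev:,.0f}")
```

Because revenue is strictly increasing in CTR, and CTR (under this assumption) is strictly increasing in data collected, a purely revenue-maximizing network always has a marginal incentive to gather more user data.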

The argument is a bit involved, so it’s worth going through it step by step (geekery follows).
Read more…

A Post About “Avatar”

December 21, 2009

While I agree that the captioning for James Cameron’s new flick “Avatar” could have used something more awesome than Papyrus as its font (especially considering they shelled out the money for a linguist to create a totally new language, and for a new camera just to film the damn thing), Keith Hopper rightly points out that there are way, way worse things it could have been.

He is right.

Mapping Out The Space: “Zittrainism” and More

December 8, 2009

Image from an emerging conversation with Graham Webster about how to start mapping out the intellectual space around the internet beyond the Berkman School, using the old-school poli-sci trick of putting everything into a 2×2 grid. Here, we vary the first two pillars/assumptions of the Berkman School, holding all else constant. On one axis, we vary whether a position places relatively greater faith and emphasis on users or on institutions in shaping the web. On the other, we vary whether a position treats “The Internet” as a particular set of features and characteristics, or is more agnostic between various forms for different purposes.

Doing so seems to make a neato variety of positions fall out.

On The Berkman School of Thought

December 3, 2009

I’ve been enthralled lately reading the amazing bit of scholarship that is Randall Collins’ The Sociology of Philosophies. The big idea of the massive 900-something-page tome, which is pretty intuitive but amazing to see played out across a huge swath of historical research, is that intellectual thought is primarily the product of social processes. To that end, he argues, you can track the course of a school or frame of thinking by closely examining whom scholars and intellectuals hung out with and associated themselves with through history. He makes some neat arguments about the behavior of growing or failing schools of thought, and it’s all pretty great. Collins’ focus is on traditions in philosophical thinking, but I’ve been thinking a lot about how this might apply to other fields as well, particularly to the scholarship and popular discussion about the internet that’s emerged in the past two decades.

Obviously, the Berkman Center for Internet and Society at Harvard, with its sprawling list of digerati who have passed through over the past ten years and change, is a nice place to start such a discussion. Much of the conversation and scholarship happening there has influenced a great deal of the popular rhetoric around the web in the past decade, and looks to continue to do so for the foreseeable future.

So, the question: if there is one at all, what constitutes the Berkman School of Thought? What are the underlying assumptions undergirding the community of thinkers that has surrounded the Center?

Read more…
