Designing Pro-social Networks through Multi-Agent Learning Theory

Artificial intelligence (AI) and algorithmic decision-making play a large and increasing role in how we conduct economic activity and organize our societies. We use such algorithms to allocate public funds, discover and share information, hire employees, approve mortgages, and perform a variety of other important tasks. However, although these algorithms and the people who use them rarely act in isolation, most work in AI does little to examine the precise mechanics of how people and algorithms interact, adapt, and mutually influence each other through economic, physical, social, and other ties.

Over the past decade, social networks and large internet platforms have amassed an unprecedented level of social and political importance. Social networks are the primary means by which much of the world obtains information and communicates with others, and so they play an essential role in people's access to information, exposure to ideas, community building, mental health, and participation in democracy. The ability of platforms to provide us with content that is perfectly tailored to our specific interests, no matter how niche, has been transformative, allowing both for self-discovery and for connections to form between individuals who share the same interests and life circumstances. Yet the incentives of social network companies are not clearly bounded: because profit stems from engagement at scale, the economic motivations of these platforms have naturally led them to serve individuals the exact content they will find most addictive, which is not necessarily the healthiest content. This has caused widespread harm by enabling polarization, online harassment, and mis/disinformation. It is important that we study how social networks can evolve to mitigate harm and support “pro-social” goals, and how best to align these goals with the incentives of the platforms.

Overview

On social networks, algorithmic personalization drives users into filter bubbles where they rarely see content that deviates from their interests. We present a model for content curation and personalization that avoids filter bubbles, along with algorithmic guarantees and nearly matching lower bounds. In our model, the platform interacts with n users over T timesteps, choosing content for each user from k categories, and receives stochastic rewards as in a multi-armed bandit.

To avoid filter bubbles, we draw on the intuition that if some users are shown some category of content, then all users should see at least a small amount of that content. We first analyze a naive formalization of this intuition and show it has unintended consequences: it leads to a “tyranny of the majority,” with the burden of diversification borne disproportionately by those with minority interests. This leads us to our model, which distributes the burden more equitably. We require that the probability any user is shown a particular type of content is at least γ times the average probability that all users are shown that type of content. Full personalization corresponds to γ = 0 and complete homogenization corresponds to γ = 1; hence, γ encodes a hard cap on the level of personalization. We also analyze formulations where the platform can exceed its cap but pays a penalty proportional to its constraint violation.

We provide algorithmic guarantees for optimizing recommendations subject to these constraints, including nearly matching upper and lower bounds over the entire range of γ ∈ [0, 1], which show that the cumulative reward of a multi-agent variant of the Upper-Confidence-Bound (UCB) algorithm is nearly optimal. Using real-world preference data, we empirically verify that under our model, users share the burden of diversification and experience only minor utility loss when recommended more diversified content.
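
To make the γ constraint concrete, the sketch below shows one simple way a single recommendation round could satisfy it: each user's content distribution is a (1 − γ) mixture of a point mass on their own UCB-best category and the population-average distribution. Because the population average of the mixed distributions equals the population average of the personalized ones, every user is shown each category with probability at least γ times the average, as required. This is a minimal illustrative heuristic under assumed names and synthetic statistics (n_users, n_categories, counts, reward_sums), not the algorithm whose regret is analyzed in our results.

```python
import numpy as np

# Minimal sketch (not the exact algorithm from the paper) of one recommendation
# round under the gamma constraint: for every user u and category c,
#   P(u is shown c) >= gamma * (average over users of P(shown c)).
# All names and the synthetic reward statistics below are illustrative assumptions.

rng = np.random.default_rng(0)

n_users, n_categories, gamma = 5, 4, 0.3
counts = np.ones((n_users, n_categories))          # times each (user, category) pair was shown
reward_sums = rng.random((n_users, n_categories))  # cumulative observed rewards (synthetic here)
t = int(counts.sum())                              # total number of pulls so far

# Per-user UCB score for each content category.
means = reward_sums / counts
ucb = means + np.sqrt(2.0 * np.log(t) / counts)

# Each user's fully personalized choice: a point mass on their UCB-best category.
personalized = np.zeros((n_users, n_categories))
personalized[np.arange(n_users), ucb.argmax(axis=1)] = 1.0

# Mix each user's personalized distribution with the population average:
#   p_u = (1 - gamma) * q_u + gamma * mean_u(q_u).
# The population average of p equals the population average of q, so
# p_u[c] >= gamma * mean_u(p_u[c]) holds for every user and category.
# gamma = 0 recovers full personalization; gamma = 1 shows everyone the same mix.
population_avg = personalized.mean(axis=0)
policy = (1.0 - gamma) * personalized + gamma * population_avg

# Sample one category to show each user at this timestep.
shown = [rng.choice(n_categories, p=policy[u]) for u in range(n_users)]
print(shown)
```

The mixing step is only one feasible policy; it trades off reward suboptimally in general, whereas the guarantees above concern optimizing cumulative reward subject to the same constraint.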