50% off all ebooks and videos - including my own Building Web Reputation Systems using the discount code: CFFLNT.
Social Media Clarity Podcast Episode List:
Leave comments here or the show at socialmediaclarity.net
Bryce Glass returns to the show to join Scott and Randy in interviewing Matt Leacock: a mild-mannered UX designer by day, but after hours he uses his superpowers to design award-winning board games. You may have played one of his most popular games: Pandemic, Roll Through the Ages, or Forbidden Island…
Matt talks about how he takes the lessons and techniques from each discipline to improve the other.
Thomas: Hey, this is Thomas Knoll from Primeloop.com and I listen to the Social Media Clarity podcast.
Matt: When designing Pandemic, I barely play-tested the heroic game; that's the most difficult level of the game. It was really hard. I wasn't sure I'd even be able to beat it, but I knew the internet could. We put it out there because I knew players would find a way, and again, you need to trust your players.
Randy: Welcome to the Social Media Clarity podcast, 15 minutes of concentrated analysis and advice about social media in platform and product design.
Randy: Welcome to the Social Media Clarity podcast. I’m Randy Farmer.
Bryce: I’m Bryce Glass.
Scott: I’m Scott Moore.
Matt: I’m Matt Leacock.
Scott: We have some gaming reputation news for you this episode. Microsoft recently announced that they would be notifying Xbox One players if they had been penalized for disruptive or abusive behavior through their reputation system. All Xbox One players have a reputation rating ranging from "good" to "needs work" to "avoid me". Players will get notifications regarding changes to their reputation status, and if they slip into the "avoid me" category, they may have difficulty finding others to play with in the Xbox One's matchmaking service, or may lose other privileges on the system.
Conversely, Microsoft has hinted that they may reward players who maintain positive social reputations. Recently, the game news and community site Polygon.com reported that a survey sent to Sony PlayStation 4 owners included questions about a player reputation system. Check the show notes at socialmediaclarity.net for links to articles with more information. As these systems develop, we may take a closer look at how they are performing, and whether they are meeting the needs of both company and community.
Randy: Today we’ve got two special guests. Bryce Glass returns to the podcast. You may recall that Bryce was one of our original hosts. Thanks for coming back to visit the old stomping grounds Bryce.
Bryce: Thanks, it’s my pleasure. I’m really glad I could make it back.
Randy: He’s returning to help us interview our special guest today, Matt Leacock. Matt, Bryce, and I worked closely together at Yahoo, where Matt was an interactive design architect for Yahoo's social platforms and products. Now he's chief designer at Sococo, a company creating technology to run virtual offices, where he handles all aspects of user experience, including interaction design, visual design, and corporate identity.
He's also the owner of Locust Games, where he designs and develops board and card games for the international market. You may have heard of some of his award-winning games: Pandemic, Roll Through the Ages, and Forbidden Island. He recently released Forbidden Desert. Pandemic and Roll Through the Ages were named family game of the year in 2009 and 2010 by Games Magazine. Pandemic and Forbidden Island have both defeated the players on Geek and Sundry's TableTop web series.
I’ve been lucky enough to be a play tester for almost all of these awesome games. Congratulations, and we are very excited to have you with us today Matt.
Matt: Yeah, thanks Randy, thanks for having me.
Randy: Matt, could you talk a little bit about what Sococo is and what it does?
Matt: Sococo is a virtual office for you and your team. As companies get larger and more distributed, you end up joining lots of teams where you can’t get everybody in the same room all the time. We’ve created a very easy to use virtual office environment that allows you and your colleagues to get together on a virtual floor plan. You all get together and you can talk with voice, you can chat, you can screen share, you can use video, but more importantly, you can see everyone in your team on this floor plan all in the same room or you can break off into different rooms. It basically uses a visual metaphor to provide a sense of place for a distributed team.
Randy: I’d be interested in your take on how the two disciplines have informed each other, how you’ve grown as a designer and how that’s been able to inform your process when you put together a game, versus your process when you think about product design.
Matt: Sure, yeah, it's something I mull over quite a bit, because I bounce between the two tasks daily, doing interaction and user experience design by day, and then doing a lot of game design work in the evenings and on weekends. I think one obvious thing is that I've taken the toolkit of user experience design and applied it directly to board game design.
When I started out early on, it's funny for me to think of a career in game design; I've really wanted to do it since I was a kid. I started out at eight, and I was a hack. You flail and you're not really sure what you're doing, and you try all sorts of different things. A lot of things don't work, but as I grew and became more experienced as a user experience designer, I was able to apply that directly to game design. I've got a lot more rigor and methodology around how I approach a design.
I actually write up a creative brief. I really want to understand the audience. I take a lot of the lessons from user experience research and apply them to play testing. It's really easy for me to see how I apply methodology from experience design to game design.
Bryce: I was really impressed during the development of Pandemic; I remember watching you user-test the documentation, the rules for the game, and I thought, oh, what a perfectly fantastic idea.
Matt: One of the most direct things that I think I get from game design and apply back to user experience design and product design is just the directness and accessibility of play testing. It's pretty easy for me to find somebody to sit down and try one of my games out, and it's really critical as well. I have no illusions that a game will ever work unless it's tested. It's like code: if you've written it, it's guaranteed not to work. You have to test it, right?
The same thing with interaction design, but too many times, we pull some interaction design patterns off the shelf, we plug them in, and we say we’re done. We wash our hands of it. We close the ticket and assume that everything’s going to be great.
I have no such assumptions when it comes to board game design. Because I'm always play testing board games, I get into that rhythm and beat, and I bring a lot of that back into interaction design. It helps me bring that energy back. It helps me remember that, until I've put these things in front of humans, it's not done.
That's one of the things I enjoy about game design: just how close you can get to players and how much you have to get inside their heads. It's a good reminder for product design to keep that at the forefront of my process.
Randy: In 2008 you gave a tech talk at Google entitled, “Cooperation and Engagement; what can board games teach us?” in which you drew some important lessons from design of Pandemic to user interface and interaction design. It’s been six years. Both as a user interface and board game designer, what lessons still ring true from that talk?
Matt: I think all of that stuff is still legitimate, but I'd probably add a bit to it. I think one of the key things in that talk was around flow, and trying to keep your players in an engaged state between boredom and frustration. That seems to be the key thing for me as a game designer; that's the key problem for me to solve. To borrow from Jesse Schell, who had a wonderful book on the art of game design: he describes playing games as problem solving in a playful mood.
In order to have enjoyable problems, you need to make sure they're not too frustrating and not too boring. I think those things are directly applicable to product design, especially when you're talking about the new user experience. You need to engage someone, you need to pique their interest, and get them into a channel where they can find the next bit of information. It comes down to really good communication design as well: being able to talk about something generally, then get a little more specific, and down you go.
We used to talk about this as the scent of information: always having the next breadcrumb in front of you that you can follow. That was a really key thing when you're talking about marketing and understanding how a product works, what your next steps are, and how to really engage with the product.
That was one of the lessons. Another one that was really key is what I think I called "embodiment". That can mean a lot of different things, but the things that come to me right away are really good use of metaphor and finding ways to create components and controls that embody what you're trying to communicate; things that are self-evident, so you don't have to present a wall of text in order to explain either how to play the game or how to use the interface.
To use the Sococo example, you've got an avatar, and because it's a representation of a person, you expect you can do certain things with it. You expect that you can drag it around to move it. You expect you should be able to communicate with another person somehow, just by directly manipulating the avatar object. It's a similar thing with a game pawn: if you present a pawn to someone, it begs to be moved with your hand. You're just trying to find the right components that both tell a story that is easy for people to understand and communicate the parameters of the system within which that control or component operates.
Another lesson was that of simplification: trying to make things as simple as possible, but no simpler. I told a story about Pandemic, how I was on a simplification kick, and I kept removing elements just to see how simple I could make the thing, reduce it down to its fundamental core, and make it incredibly obvious what to do; super obvious, because there were very few components, right?
I ended up simplifying a deck of cards down to the point where I realized that players were having a lot of problems with it. They were making all sorts of mistakes, mapping different actions improperly, and I ended up having to add all the complexity back in, introducing as many as 54 new cards to overcome that.
When I began, I think players had a mix of components, little cubes, and they had actions. They could spend cubes or cards, or just cubes, or just cards, and it was just a confusing mess. I was able to whittle that down so there was really only one thing players needed to worry about on their turn: how to spend four simple actions. Then I did another exercise: how could I communicate the four available actions as crisply and concisely as possible, and how could I provide a player aid with iconography and very direct statements, so that people could understand very directly what they could do on their turn?
A key part of design is throwing away everything that’s extraneous or that gets in the way.
Randy: Sometimes simplification adds components.
Matt: Yeah, that's right, and in that case it was 54 cards, which is a non-trivial expense. But when I did add them, it opened up all sorts of other doors for adding to the story and adding a new clock mechanism; it just unlocked all sorts of things. Yeah, it was simpler by adding components.
Scott: What lessons can we take from cooperative game design and apply to social interfaces?
Matt: One thing, when you're designing a cooperative game, is that you really need to trust your players. I see some people designing games who put helpful play tips at the end, where they basically ruin the game by explaining how to win it. I hear this from my daughter: "Don't tell me how to do it!", right? She wants to do it for herself. She wants to understand, and I think that's similar in a cooperative game. You're presenting a puzzle, so presenting the answer to the puzzle, or tips on how to solve it, is counterproductive. You need to trust that your players are going to figure out how to play it.
Actually, when designing Pandemic, I barely play-tested the heroic game; that's the most difficult level of the game. It was really hard. I wasn't sure I'd even be able to beat it, but I knew the internet could. We put it out there. I did not test that a whole lot. I play-tested the basic and normal levels quite a bit, but not the heroic level, because I knew players would find a way. Again, you need to trust your players.
In designing Sococo, we needed to find ways to allow people to work together fluidly and flexibly, and we needed to trust them with degrees of freedom. Our basic user principle is freedom, a locus of control: basically, let the user do what they want to do. Don't put an endless supply of dialog boxes in front of them warning them not to do a certain thing. If they want to erase their hard drive at a certain point, you have to let them erase their hard drive.
In a social space, this can be a little scary, because you can have interactions that are questionable. The thing is, you don't necessarily want to control that with software. No amount of software is going to prevent your colleague from being a dick, right? Basically, what we do with a lot of interactions in Sococo is just provide appropriate transparency to ensure that social norms take over, rather than trying to build in all these turn-taking mechanisms and checks and OKs, and this and that.
If you want to tear down what your boss is sharing in a large board meeting and show pictures of your kids, you can do that, but everybody's going to see it; everybody's going to see that you did it, and so people don't do it. We don't have passing of control, for example, in Sococo. We trust that people in this controlled environment, with people they know, are going to behave accordingly, because we're providing the appropriate transparency.
If you want to kick someone out of your meeting when they're talking, you can do that, but it's going to tell them, and it's going to tell everybody else in the meeting that you did it. Again, it comes back to trust. It's an important lesson, I think, that spans both game design and experience design.
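[Show notes: the transparency-over-permission pattern Matt describes can be sketched in a few lines. This is an illustrative toy model, not Sococo's actual code; all names are invented.]

```python
# Instead of gating an action behind confirmation dialogs, perform it
# immediately and broadcast who did it to everyone present. Social
# pressure, not software, is what discourages abuse.

from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    members: set = field(default_factory=set)
    log: list = field(default_factory=list)

    def broadcast(self, event: str):
        # Every member sees the event, including who acted.
        self.log.append(event)

    def kick(self, actor: str, target: str):
        # No permission dialog: the action succeeds, but it is
        # publicly attributed to the actor.
        self.members.discard(target)
        self.broadcast(f"{actor} removed {target} from {self.name}")

room = Room("Board Meeting", {"alice", "bob", "carol"})
room.kick("alice", "bob")   # allowed, but everyone sees alice did it
```

The design choice: the cost of a questionable action is reputational, paid in front of the whole room, rather than being prevented up front.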
Randy: Once again, I’d like to thank Matt and Bryce for joining us today.
Scott: Thanks a lot.
Bryce: Thanks a lot Randy; it’s been fun.
Matt: Thanks Randy, it was great to be here.
Scott: Our tip for you is to play Pandemic, to get a feel for the elements of cooperative play, how simple elements can ramp the difficulty up from beginner to challenging play, and introduce new opportunities for play. If you are unable to obtain a copy of Pandemic, or to get to a local game night where someone can share a copy with you, the next best thing is to at least look at the rules, which are available for free online, and to check out both the video introduction to the game and a video of a full game being played through.
Links to the Pandemic rules and play videos, as well as Matt's 2008 Google tech talk, are in our show notes. Hat tips to publisher Z-Man Games and Spydah666 for the Pandemic rules and play-through videos.
Marc: If you find our podcast valuable, please help others find us by subscribing to us on iTunes or Stitcher, liking us on Facebook, and sharing podcast links through your own social networks.
Randy: For links, transcripts, and more episodes, visit socialmediaclarity.net. Thanks for listening.
Marc, Scott, and Randy discuss LinkedIn's so-called SWAM (Site Wide Automatic Moderation) policy, and Scott provides some tips on moderation system design…
[There is no news this week in order to dig a little deeper into the nature of moderating the commons (aka groups).]
Block & Delete takes the member out of the group and places them on the Blocked tab, which prevents them from requesting to join again. It also deletes all past contributions. Please be aware that when you select Block & Delete for a group member, this will result in automatic moderation of all their future posts in any group site-wide. Read more about removing spam from your group. (emphasis ed.)
3/11/14: Note: The mechanism that changes a member’s posting permissions is automated and cannot be reversed by LinkedIn Customer Support. We cannot provide a list of which groups blocked a member due to privacy restrictions. (emphasis ed.)
John Mark Troyer: Hi, this is John Mark Troyer from VMware, and I'm listening to the Social Media Clarity podcast.
Randy: Welcome to episode 14 of the Social Media Clarity podcast. I’m Randy Farmer.
Scott: I’m Scott Moore.
Marc: I’m Marc Smith.
Marc: Increasingly, we're living our lives on social media platforms in the cloud, and in order to protect themselves, these services are deploying moderation systems, regulations, and tools to control spammers and abusive language. These tools are important, but sometimes their design has unintended consequences. Today we're going to explore some choices made by the people at LinkedIn in their Site Wide Automatic Moderation system, known as SWAM. The details of this service are interesting, and they have remarkable consequences, so we're going to dig into it as an example of the kinds of choices and services that are already cropping up on all sorts of sites. This one is particularly interesting because the consequences of losing access to LinkedIn can be quite serious. It's a very professional site.
Scott: SWAM is the unofficial acronym for Site Wide Automated Moderation, and it's been active on LinkedIn for about a year now. Its intent is to reduce spam and other kinds of harassment in LinkedIn groups. It's triggered by a group owner or group moderator removing or blocking a member from the group. The impact is site-wide: if somebody is blocked in one group, they are put into what's called moderation in all groups. That means your posts do not automatically show up when you post; they go into a moderation queue and have to be approved before the rest of the group can see them.
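[Show notes: the propagation Scott describes can be modeled in a few lines. This is an illustration of the described behavior only, not LinkedIn's actual implementation; names and structure are our assumptions.]

```python
# A minimal model of a SWAM-style policy: a single group-level block
# flips a SITE-WIDE flag, so future posts in EVERY group are queued
# for approval instead of appearing immediately.

moderated_members = set()   # members whose posts need approval everywhere
moderation_queues = {}      # group -> list of posts awaiting approval

def block_in_group(member: str, group: str):
    # One moderator's block in ONE group has a site-wide effect.
    moderated_members.add(member)

def post(member: str, group: str, text: str) -> bool:
    """Return True if the post appears immediately, False if queued."""
    if member in moderated_members:
        moderation_queues.setdefault(group, []).append((member, text))
        return False
    return True

assert post("pat", "python-devs", "hello")            # appears immediately
block_in_group("pat", "women-in-tech")                # blocked in one group...
assert not post("pat", "python-devs", "hello again")  # ...queued everywhere
```

Note that nothing in this model distinguishes a spammer from an honest mistake; that is exactly the problem discussed below.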
Randy: Just so I’m clear, being flagged in one group means that none of your posts will appear in any other LinkedIn group without explicit approval from the moderator. Is that correct?
Scott: That’s true. Without the explicit approval of the group that you’re posting to, your posts will not be seen.
Randy: That's interesting. This reminds me of the scarlet letter from American Puritan history. When someone was accused of a crime, specifically adultery, they would be branded so that everyone could tell. Regardless of whether they were a willing party to the adultery or a victim, they were cast out. This is a kind of cast-out mechanism too, but unlike then, when it was an explicit action the whole community knew about, a moderator on a LinkedIn group can do this by accident.
Scott: A Forbes article in February related the story of someone who had joined a LinkedIn group that was for women, despite it having a male group owner and not explicitly stating that the group was women-only. The practice was that if men joined the group and posted, the owner would simply flag the post as a way of keeping it a women-only group. Well, simply because the rules were not clear and the expectation was not explicit, this person was put into moderation for making a pretty much honest mistake.
Randy: And this person was a member of multiple groups, and now their posts would no longer automatically appear. In fact, there's no way to globally turn this off, to undo the damage that was done. So now we have a scarlet letter and a nonexistent appeals process, and all this presumably to prevent spam.
Scott: Yeah, supposedly.
Randy: So it has been a year. Has there been any response to the outcry? Have there been any changes?
Scott: Yes. It seems that LinkedIn is taking a second look. They've made a few minor changes. The first notable one is that moderation is now temporary, so it can last an undetermined amount of time, up to a few weeks. The second is that they've actually expanded how you can get flagged, to include any post, contribution, or comment that is marked as spam or flagged as not being relevant to the group.
Randy: That's pretty amazing. First of all, shortening the time frame doesn't really do anything. You're still stuck with a scarlet letter; it just fades over months.
Marc: So there's a tension here. System administrators want to create code that is essentially a form of law. They want to legislate a certain kind of behavior, and they want to reduce the cost of people who violate that behavior, and that seems sensible. I think what we're exploring here is unintended consequences, and the fact that the design of these systems seems to lack some of the features that previous physical-world or legal relationships have had: that you get to know something about your accuser, you get to see some of the evidence against you, you get to appeal. All of these are expensive, and I note that LinkedIn will not tell you who, or which group, caused you to fall into the moderation status. They feel that there are privacy considerations there. It is a very different legal regime, and it's being imposed in code.
Randy: Yes. What's really a shame is that they are trying to innovate here, when in fact there are best practices that avoid these problems. The first order of best practice is to evaluate content, not users. What they should be focusing on is spam detection and behavior modification. Banning or placing people into moderation, which is what they're doing, does neither. It certainly catches a certain class of spammer, but, in fact, the spam itself gets caught by the reporting. Suspending someone automatically from the group they're in, or putting them into auto-moderation for that group if they're a spammer, should work fine.
Also, doing traffic analysis when this happens in multiple groups in a short period of time is a great way to identify a spammer and deal with them. What you don't need to do is involve volunteer moderators in cleaning up the exceptions. They can still get rid of the spammers without making moderators handle the appeals, because, in effect, that is the current appeals process: you appeal to every single other group you're in, which is really absurd, because you've done nothing wrong there; you may be a heavy contributor there. We've done this in numerous places. I've mentioned my book Building Web Reputation Systems before on the podcast; Chapter 10 describes how we eliminated spam from Yahoo Groups without banning anyone.
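[Show notes: the "evaluate content, not users" traffic analysis Randy describes might look something like this. The thresholds and names are illustrative assumptions, not a documented algorithm from the book.]

```python
# Count spam reports against a member's content, and treat the member
# as a likely spammer only when their content is reported across
# several DISTINCT groups within a short window. A single report in
# one group is weak evidence; a burst across many groups is strong.

import time
from collections import defaultdict

REPORT_WINDOW = 3600   # seconds (assumed)
GROUP_THRESHOLD = 3    # distinct groups reporting within the window (assumed)

reports = defaultdict(list)   # member -> [(timestamp, group), ...]

def report_spam(member: str, group: str, now: float = None):
    reports[member].append((now if now is not None else time.time(), group))

def is_likely_spammer(member: str, now: float = None) -> bool:
    now = now if now is not None else time.time()
    recent_groups = {g for (t, g) in reports[member] if now - t < REPORT_WINDOW}
    return len(recent_groups) >= GROUP_THRESHOLD

report_spam("spammy", "jobs", now=0)
report_spam("spammy", "marketing", now=10)
assert not is_likely_spammer("spammy", now=20)   # only two groups so far
report_spam("spammy", "python-devs", now=30)
assert is_likely_spammer("spammy", now=40)       # three groups within the hour
```

The key contrast with SWAM: the signal here comes from reported content aggregated by the operator, so no volunteer moderator ever has to adjudicate another group's block.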
Marc: I would point us to the work of Elinor Ostrom, an economist and social theorist, who explored the ways that groups of people can manage each other’s behavior without necessarily imposing draconian rules. Interestingly, she came up with eight basic rules for managing the commons, which I think is a good metaphor for what these LinkedIn discussion groups are.
Randy: Well, and that's rule four.
Randy: I would even refine that a little bit for online, which is not only to monitor, but to help shape members' behavior, so that people are helping people conform to their community.
Randy: Specifically on dispute resolution, which includes an appeals process: for Yahoo Answers, we implemented one that was almost 100% reliable in discovering who a spammer was. If someone had a post hidden, an email would be sent to their registered email address saying, "Your post has been hidden," and taking them through the process for appealing. What was interesting is that if the email arrived at a real human being, it was an opportunity to help them improve their behavior. If they could edit, they could repost.
For example, this is what we do at Discourse.org: if you get one of these warnings, you are actually allowed to edit the offensive post and repost it with no penalties. The idea is to improve the quality of the interaction. It turns out that, to a first approximation, all spammers on Yahoo Answers had bogus email addresses, so the appeal would never be processed and the object would stay hidden.
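[Show notes: the hide-notify-appeal loop Randy describes can be sketched as follows. All details are illustrative; this is a model of the flow, not Yahoo's or Discourse's code.]

```python
# Hidden content triggers an email to the registered address. A real
# person receives it and can appeal (or edit and repost); a bogus
# address means the appeal is never processed, so the content stays
# hidden, which to a first approximation filters out the spammers.

def hide_post(post: dict, send_email) -> str:
    post["hidden"] = True
    delivered = send_email(post["author_email"],
                           "Your post has been hidden; reply to appeal.")
    if not delivered:
        # Bogus address: no one can appeal, so the post stays hidden.
        return "stays-hidden"
    return "appeal-possible"

# A toy delivery check standing in for a real mail system.
real_inboxes = {"alice@example.com"}
deliver = lambda addr, msg: addr in real_inboxes

assert hide_post({"author_email": "alice@example.com"}, deliver) == "appeal-possible"
assert hide_post({"author_email": "bogus@nowhere"}, deliver) == "stays-hidden"
```

The elegance is that no one has to classify accounts up front: the spammers select themselves out by being unreachable.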
Randy: And it also works from the bottom up. I actually approve of users marking postings as spam, having that content hidden, and moving some reputation around. Where we run into trouble is when that signal is amplified by moving it up the interconnected system and re-propagating it across the system. The only people who have to know whether someone's a spammer is the company, LinkedIn. No other moderator needs to know. Either the content is good or it's not good.
Marc: Elinor Ostrom’s work is really exciting, and she certainly deserved the Nobel Prize for it because she really is the empirical answer to that belief that anything that is owned by all is valued by none. That’s a phrase that leads people to dismiss the idea of a commons, to believe that it’s not possible to ethically and efficiently steward something that’s actually open, public, a common resource, and of course, the internet is filled with these common resources. Wikipedia is a common resource. A message board is a common resource.
Like the commons that Ostrom studied, a lot of them are subject to abuse, but what Ostrom found was that there were institutions that made certain kinds of commons relationships more resilient in the face of abuse, and she enumerated eight of them. I think the real message is that, given the opportunity, people can collectively manage valuable resources, and give themselves better resources as a result, by effectively managing the inevitable deviance: the marginal cases where people are trying to make trouble. Most people are good.
Scott: Your tips for this episode are aimed at community designers and developers who are building platforms that allow users to form their own groups.
Randy: That was a great discussion. We’d like the people at LinkedIn to know that we’re all available as consultants if you need help with any of these problems.
Marc: Yeah, we’ll fix that for you.
Randy: We’ll sign off for now. Catch you guys later. Bye.