Agendas are everywhere, disempower bad faith

It’s election season in Canada! If we hadn’t already been inundated with everything happening these last two years, this would be ushering in a smorgasbord of bad faith arguments over every imaginable topic. Instead it’s the year 2021 and we’re living in a dystopia where bad faith arguments are already consuming enormous amounts of energy and public discourse, so really the election is just extra seasoning this year – but maybe a good excuse for an online journal entry.

Politicians tend to be the merchant royalty of bad faith arguments. There are examples worldwide of political decision makers who do their job pursuing honesty while striving for accuracy, and they’re amazing – but they seem to be the exception, and none of them personally represent me, so I’m just bitter. Observing their peers (and the comments section of… anything) showcases one pattern: the tendency to invalidate an argument on the presumption of an agenda.

That pattern is super common in game development. As my project responsibilities increased I found myself spending more and more time in meeting rooms. Over the years I found this inefficient and problematic enough that, at my first opportunity in a department leadership role, I explicitly re-designed expectations for every layer of that department to promote personal autonomy, accountability, and creative agency. It was win-win! The folks reporting to me ended up being more productive and generally enjoying their work more (unless they didn’t, which is a good time for me to step in with support), and I spent less time in meetings (I wasn’t the sole representation of “Design” – the designer working on the content or feature would be in those meetings and we could catch up after), which let me be productive developing and not just “managing”.

Anyway, that small success story aside, I have spent more time in meeting rooms than I would wish on anyone. Not all meetings are soul-draining; collaborative meetings can be energizing! Though for a successful collaborative meeting you either need everyone acting in good faith or, at the very least, willing to step aside and just listen – otherwise the meeting can be derailed very, very quickly by a loosely related (or in extreme cases a completely unrelated) agenda.

One strategy I’ve used to combat the hijacking of meetings is to pre-empt with a focus on goals – what are we trying to achieve? This strategy has had mixed success: enforcing the goals (ex: “hey, that sounds cool, but it’s off-topic and we need to be working towards [goal reference] right now”) can end up in its own spin-off argument about how [agenda] fits the goal (bad actors gonna bad act), and depending on who is moderating the meeting this may or may not be an acceptable use of the next 20-60 minutes for 5-10 people. When not moderating meetings myself, I made a habit of doing the math whenever a meeting was completely hijacked (0% productivity, no new action items other than to schedule more meetings): the number of people multiplied by the number of minutes. I would then present this data to the moderator. In hindsight this may have made me unpopular at that company (like, actually – during ‘coaching training’ one of the company directors made a joke about how some of the other directors ostracized me, and I didn’t find the joke funny).
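For what it’s worth, the math really is back-of-the-napkin stuff. A minimal sketch (the attendee count and duration below are made-up examples, not figures from any real meeting):

```python
# Back-of-the-napkin cost of a hijacked meeting, measured in person-minutes.
# The attendee count and duration are hypothetical examples.

def hijacked_meeting_cost(people: int, minutes: int) -> int:
    """Person-minutes burned when a meeting produces nothing actionable."""
    return people * minutes

# e.g. 8 attendees stuck in a 45-minute derailed meeting
wasted = hijacked_meeting_cost(people=8, minutes=45)
print(f"{wasted} person-minutes gone (that's {wasted / 60:.1f} person-hours)")
```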

There’s another strategy I’ve seen deployed: when there’s an assumption that someone has an agenda but won’t speak openly about it, that person gets excluded from the decision making process. That’s one method of disempowering an unhelpful agenda, but you may also be doing the group a disservice – just because someone may have an agenda doesn’t mean they can’t provide useful input to a conversation. In fact, sometimes a focus on an agenda (gaming example: someone may be so hyper-focused on making sure the Player vs. Player experience fits their vision that they neglect other facets of the game) can provide insights that help make better decisions, as that person can be more deeply connected to a cohort of the community than anyone else available.

Gate-keeping conversations is a very poor solution; people who have good intentions may act in bad faith purely out of passion for the topic, and you don’t want to cut out your passionate voices (you just need to keep bad faith arguments from taking up everyone’s time!). Taking the reins of moderation can be effective in keeping progress moving towards a goal, and that’s as far as my hands-on experience goes. BUT! I have seen something really cool come out of Taiwan: the idea of leveraging tech to filter out all the noise and find consensus in a conversation. That strategy has helped find consensus across a wide array of real-world civic conversations, and I can imagine the same principles applying on a smaller scale (like, say, game development!). Asynchronous conversations can happen over time (which works well with a work-from-home mentality), and an automated service could pull out consensus for evaluation by decision makers – decision makers would be freed from having to moderate conversations, allowing them to get back to directly contributing. Win-win?!
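To make that more concrete: I believe the Taiwan effort uses Pol.is, the open-source tool behind vTaiwan, and my simplified understanding of that style of consensus-finding goes something like the sketch below. Participants vote agree/disagree/pass on short statements, voters with similar patterns get clustered into opinion groups, and statements every group leans “agree” on bubble up as consensus. Treat this as my own rough reading rather than the actual implementation – the vote data here is invented:

```python
# Rough sketch of Pol.is-style consensus finding (simplified, illustrative only).
# Rows = participants, columns = statements; 1 = agree, -1 = disagree, 0 = pass.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

votes = np.array([
    [ 1,  1, -1,  1],   # hypothetical vote data, not from any real conversation
    [ 1,  1, -1,  0],
    [-1,  1,  1,  1],
    [-1,  1,  1, -1],
    [ 1,  1,  0,  1],
])

# Reduce to 2D so participants with similar voting patterns land near each other,
# then cluster them into opinion groups.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# A statement is a consensus candidate when every opinion group leans "agree" on it.
for statement in range(votes.shape[1]):
    group_means = [votes[groups == g, statement].mean() for g in np.unique(groups)]
    if all(mean > 0 for mean in group_means):
        print(f"Statement {statement} looks like consensus (group means: {group_means})")
```

The appeal for a dev team is that this kind of analysis is cheap to run continuously, so a moderator doesn’t have to sit in the conversation to see where the agreement actually is.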

Maybe! It’s all theory, but the fact that the software used is open source is very promising. The mechanism for pulling out and analyzing conversation text for consensus could even be added as a plug-in for task tracking software (like Jira) or communication software (like Slack or Discord), reducing friction for conversation contributions. Timeframes and milestones tend to be set months if not years in the future, so all sources of conversation (internal to the company, or even crowd-sourced from fans) could be collated for consensus, giving decision makers reliable conversation data to base their decisions on when the time comes (rather than having bad faith arguments consume disproportionate amounts of their time).
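As a thought experiment, the collation side of such a plug-in could do little more than pull raw statements from each source and tag where they came from before they hit the consensus analysis. The fetchers below are pure placeholders – none of this touches a real Jira, Slack, or Discord API:

```python
# Hypothetical collation layer: merge statements from several conversation sources
# into one pool for consensus analysis. Source names and fetchers are placeholders.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Statement:
    source: str   # e.g. "internal", "fans" – labels only, for traceability
    text: str

def collate(sources: dict[str, Callable[[], Iterable[str]]]) -> list[Statement]:
    """Pull raw statements from each source and tag them with where they came from."""
    pool: list[Statement] = []
    for name, fetch in sources.items():
        pool.extend(Statement(source=name, text=text) for text in fetch())
    return pool

# Stand-in fetchers; in practice these would wrap whatever integrations exist.
pool = collate({
    "internal": lambda: ["PvP matchmaking should prioritize skill over latency"],
    "fans":     lambda: ["More co-op content please"],
})
print(pool)
```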

That’s the thing about bad faith arguments: they don’t need to be on topic, and they tend to create more division than consensus. Someone with a goal of “winning” their argument at all costs (logic and goals out the window) can be a massively disproportionate time-sink in decision making – that time can be offloaded to this machine-learning tooling! I’d be curious whether explicitly bad faith arguments could be dumped into the analytics as well, to tease out the agendas causing the bad behavior – maybe one day, if I’m absorbed back into corporate development, I’ll have a chance to find out, ha!
