Pith
A place for discussions
93 entries
First: Oct 2, 2020
2 contributors

Here's the latest blog post which includes wonderful diagrams made by @christian: https://why.pith.is/posts/meta-cognition-2.

Board, Units, and Views

The large document (board) can be imagined as a cross between a wiki and a discussion forum. Its basic building block is the unit, which contains a pith: textual content such as a section heading, a supporting point, or an idea. The board is ultimately a network of such units, of ideas, linked together.

A large group of people uses the board to reflect the state of their discussion on some theme (e.g. climate change solutions). People in the group can form chat-like discussions with each other based on the current state of the board. For example, they can initiate a small discussion on a unit and welcome others to join; over the course of that discussion, they may meet several people they want to keep talking to. They can also create discussions with select people, or discussions with no defined initial purpose, just to talk with others. Perhaps over the course of such a discussion, they will be exposed to many viewpoints relevant to the theme at hand.

During the course of the discussion, people can examine the board together. The board is organized in two views: as its inherent network structure and as a list. The former view will show the units laid out in a "map" with links between connected units. Units can be connected if one references the other, for example. The latter view will show the units as a list so it is easier to read the content. This can be especially helpful if the linkages between units are not as relevant. Units are also hierarchical. By clicking on a unit, you can bring up a list or a network of its child units (subunits). Thus, there are many ways to organize and view information. Anyone can add content to any unit.
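
To make the structure concrete, here is a minimal sketch of how a board of units could be represented, with the two views derived from it. The field names are illustrative assumptions, not the actual data model:

// Illustrative board: each unit has a pith, child units, and links to other units.
const board = {
    units: {
        u1: { pith: "Climate change solutions", children: ["u2", "u3"], links: [] },
        u2: { pith: "Renewable energies", children: [], links: ["u3"] },
        u3: { pith: "Nuclear plants", children: [], links: [] },
    },
};

// List view: walk the hierarchy depth-first so the content is easy to read.
function listView(board, id, depth = 0) {
    const unit = board.units[id];
    const rows = [{ id, depth, pith: unit.pith }];
    unit.children.forEach(childId => rows.push(...listView(board, childId, depth + 1)));
    return rows;
}

// Network view: collect the edges implied by references between units.
function networkView(board) {
    return Object.entries(board.units).flatMap(([id, unit]) =>
        unit.links.map(target => ({ source: id, target })));
}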

Beams

A beam, which can be added by anyone to any unit, is a highlighted unit. It can be used to suggest topics that the large group should consider more. For example, someone could raise a call to focus on "Renewable Energies". It may also be used to highlight content of special interest, maybe developments in recent news. People can suggest organizational changes, such as moving a unit elsewhere. Because beams are given a higher status of importance than normal units, their use should be motivated by a reason. For example, "we haven't talked enough about renewable energies, but we should because it has recently shown great promise". We hope they will further a large group's ability to meta-cognate.

Phases and Stakeholders

By default, anyone within the large group can add subunits to some unit. For example, maybe you want to add your thoughts on the "Nuclear Plants" unit and see what others have said. However, over time, that unit may become overcrowded with contributions. Therefore, we propose a peer-moderation system to allow users to maintain the quality and readability of their units. People who are particularly interested in the upkeep of the "Nuclear Plants" unit, or of some other unit, can become that unit's stakeholder. Anyone can be a stakeholder for a unit (at least for now). Stakeholders help organize that specific unit's structure.

A unit undergoes a cycle based on iterative thinking, where ideas are first generated, such as through a brainstorm, and then tested or reflected upon in greater depth. In the generative phase, anyone can add subunits and links to a unit. In the reflective phase, stakeholders meet to reflect upon the content of the unit and can combine similar subunits, remove redundant subunits, and create or destroy links between subunits. They can also choose to move the unit, perhaps under a new category, and perform other moderation duties, such as removing clearly undesired content. A board or unit may require a minimum number out of the total number of stakeholders to be present to make changes. While the unit is in the reflective phase, others may be temporarily disallowed from adding to it.
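
As a rough illustration of the phase cycle and stakeholder quorum described above (the permission rule and quorum threshold here are illustrative assumptions, not settled design):

function canAddSubunit(unit, user) {
    // in the generative phase anyone can add; in the reflective phase only stakeholders can edit
    if (unit.phase === "generative") return true;
    return unit.phase === "reflective" && unit.stakeholders.includes(user);
}

function quorumMet(unit, presentUsers, minimum = 2) {
    // a unit may require a minimum number of its stakeholders to be present to make changes
    const present = presentUsers.filter(u => unit.stakeholders.includes(u));
    return present.length >= minimum;
}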

In the future, people may be able to extend upon this iterative thinking process, such as by adding a finalizing phase or other phases.

We plan to test key aspects of this design through user testing in the coming weeks/months.

Floating Ideas

- We are also thinking of creating a hyperbolic view to allow a user to explore the entire board and all its units at once in a focused way. This view should be able to show connections between subunits of different units, for example.
- Have optional flairs to note what a unit is, i.e. topic, support, idea, etc.
- People should be able to search for some keyword through the board, within a unit, etc.
- People should be able to bookmark units of interest.
- A group of users can pin a unit/subunit to show that they are currently discussing it. This can help members quickly get to the next unit topic and can be used to allow people outside the discussion to join.
- Allow units to be polls to help people make decisions quickly, such as what topic to focus on next as a large group.

Weekly Retrospective: Jan 11 - 17

Last week:

▣ I explored the use of emoji reactions in terms of their utility and some of the more theoretical issues that could arise

▣ @sydney and I defined a plan to test the implications of using reactions in a conversational space

▣ Sydney and I worked on the design of the global discussion space

This week saw a continuation of research and planning. Development on the platform has slowed a bit until we have a better idea of the direction we'd like to go and what to prioritize.

This week:

□ Finalize experimental design for testing reactions

□ Continue to flesh out global discussion space design

□ Create a blog post summarizing the latest state of the global discussion space design

□ Create a short video of the same ^

Here’s the latest thinking on the goal for the first experiment on reactions after lengthy discussion with @sydney. We want to figure out if there’s a significant effect of different kinds of reactions—symbolic vs. text-based—on people’s perception of a post’s content.

Within the experiment, there are four groups, each with a set of screenshots of conversation spaces with posts in these contexts:
1. Just the post (control)
2. The post with text reactions in response (simple responses such as lol, hah, yes, sure, etc.)
3. The post with emoji reactions in response (the most basic emoji counterparts to the text reactions)
4. The post with emoji reactions attached to the post (like Discord/Slack do it)

In each category the task is to rate different dimensions of a specific post's qualities on a scale. Each category provides a different kind of reaction style context that may or may not have an effect.

Some things to sort out:
- What are the ramifications of using these different approaches to reactions? It might be good to hypothesize outcomes and the resulting potential interface/system designs.
- What are the different dimensions being rated? In other words, what are the judgements we are asking people to make about the posts?
- How do different kinds of emotes have an effect? Does the 👍 have a significantly different effect than the ✅? This is probably best expanded into its own experiment.

Now to get a bit more theoretical on the emoji reaction topic.

There's a chapter in Cosmicomics by Italo Calvino that I think is quite salient. The interpretation most relevant here is that we make marks and those marks have a unique quality that is distinguishable from their meaning. The meaning of a mark is fundamentally distinct from the mark itself. While this might be the case, I tend to disagree with this sentiment in the functional sense; most marks that we make do have some symbolic or archetypal significance through our interpretation of them in a specific and understood context. An example might be the cross symbol (✝): it's a mark deeply tied to a meaning in Christian religion and the symbol itself invokes a meaning. However, the main problem seems valid; there are plenty of instances of disagreement about the meaning of a mark probably because there is no inherent, contained meaning in the mark itself. The swastika (卐) is a very good example of this; it's a symbol deeply rooted in a number of religious practices including Hinduism and Buddhism but is also perhaps the most defining symbol of the Nazis. Use in different contexts means the mark takes very different meanings because there is not one meaning inherent to the mark itself.

In the second point in my last post, I explore this idea a bit in relation to emoji reactions. Intent is a more specific way of saying meaning in the context of a conversational space. So the intent of a reaction needs to be understood in order for the reaction to have a productive and expressive value to others. Perhaps the :) emoji can be understood in many different ways in different contexts. A :) can be genuine, sarcastic, or meant to infuriate. The meaning of the symbol can be used in a way that's more meta; meanings are contrasted against each other to create more complex interpreted meanings. This is the domain of the meme.

I tend to think that this is the best and worst aspect to reactions. On one hand it allows meanings to be used in increasingly complex ways to get around or subvert the intended use of an emoji. Seeing the 🤣 emoji used in response to nearly any statement that's not serious means the symbol loses its value in conversation. Seeing it used in an unexpected and sarcastic way can give the emoji its own quality and meaning that's beyond the face value of 🤣. On the other hand it can erode others' understanding of the symbol and seed confusion or unnecessary clutter into a conversation (why would they react with 🐔??).

Reducing the number of possible emojis that can be used to react (iMessage only gives six reactions; Facebook and others are similar) is one way that others have tried to sidestep this problem. Only choosing reactions that have somewhat more well-agreed-upon meanings means that you're never even given the opportunity to react to my message with 🐔, but have to go with a generic ❤️. Too bad. The layered, expressive quality of these symbols is removed in favor of clarity. But is reducing the reaction set down to just six possible options really the best way to accomplish this goal? I don't think it is, and I don't think that Apple really thought it was either, which is why we have the absolute abomination that is Memoji. Surely there is a better way to design reactions that are more expressive without needing to resort to making my facial expressions puppeteer a cartoon emoji face.

There are two design goals I think are worth working towards:

1. Make reactions expressive. They should be able to be used in many possible ways so the conversational space is kept rich and layered. A good test of expressiveness is if reactions are meme-able.
2. Make reactions productive. A reaction should be able to serve a purpose. This is the primary reason they've been implemented in so many systems. They should be able to replace certain phrases or words that hold little value on their own other than as a marker to indicate to others that you've read something and perhaps agree with it.

I don't think these two goals are in as much tension as may initially appear. Reactions are highly contextual, so the same affordance can be used in both serious and non-serious ways. The primary challenge is to design the system such that sitting between these two modes doesn't result in a less than ideal experience.

Returning to my research on emoji reactions, there's a few points that I think are important to take from the affordance.

First is utility. Reactions are for cleaning up a chat and eliminating the need for short replies that convey very simple ideas. Replies like "lol" or "haha" or "what?" or "woah" or "nice" and so on (designers at Facebook identified the most common one-word responses in their process of designing reactions). In theory, all of these can be conveyed using an emoji (more on that in the next section). Attaching the emoji to the message also removes the need to quote, and aggregating emoji counts from all members of the group can make a message a kind of poll very easily. The one piece of functionality serves a number of very utilitarian purposes in quite a clean and efficient way.

The second idea is that reactions have, in the view of these companies, relatively agreed upon meanings. Since they have an effect on future message content, interpretation of reactions becomes essential. The 👍 should have a similar meaning to me as it does to the others who use it. Otherwise there's a mismatch between actual and perceived intent. Of course 👍 can mean different things in different contexts, but generally the intent should be well understood by others. If this isn't the case, then 👍 can't reliably be used as a replacement for "sure", "nice", "ok", "yes", and so on. It's a bit of a clash then; emojis can be as richly interpretable as any other kind of language, but as a reaction they are designed to simplify and stand in for much more specific kinds of responses. For the most part that tension is resolved through the assumption that 👍 (and every other emoji used in reactions) is clear enough for the majority of cases.

Finally, there's something else that reactions provide, a kind of assurance that what you've said has been seen and interpreted. This is the non-verbal aspect to the affordance. It's a mark, a symbol that can be used to ensure you and others are on the same page, or to gauge how others are interpreting your messages. Or perhaps just that they've been seen. Reactions have another sort of utility in this regard.

Weekly Retrospective: Jan 4 - 10

Last week:

▣ I worked on re-implementing the discussion join logic using Svelte

▣ I laid out a testing plan for the spring

▣ @sydney and I worked on designing the global discussion space

▣ Sydney and I developed experimental methodologies for testing conversational mechanics and testing the implications of using different kinds of metrics in the global discussion space

▣ Sydney summarized the latest state of the global discussion space design in a blog post

Last week was mostly focused on experimental design and system design. This coming week will largely be the same.

This week:

□ Codify experimental methodologies and design for both conversational mechanics and metrics studies

□ Re-implement the chat interface in the Svelte rewrite of the client

The first question to address: how does people's use of specific chat affordances affect others' perception of them?

In other words, knowing nothing about someone else except seeing how they converse through the interface, how do people form judgements about others?

Of course the content will play a role. So the key is to separate that out as a confounding variable in the experimentation.

The overall goal here is to find a way to create a conversational space that allows people to be expressive and clear without significantly warping others' perception of them as a result of the affordances they choose to use. It's our job to figure out what affordances to provide in order to create this space. So the place to start is to ask whether there are affordances that do have this effect, then work backwards to find a more essential version without it.

Here's the latest blog post: https://why.pith.is/posts/meta-cognition-1. :)

Currently, I am designing how small groups or individual discussers can organize their ideas as a large group. Our main idea is that the reflections of the large group are captured in a large tree document. However, this presents several issues.

Issues

1) How will small groups form in the first place?

2) What if there are a lot of people all vying to edit the same piece of the document?

3) How will people within a small group agree on what edit should be made to the large document?

4) What if the growth of the large document gets out of hand, to the point it is no longer sensible? If ideas are all over the place, this could make it harder to identify what the ideas are and therefore what needs to be discussed further.

5) What does the document represent? The content people have decided on or their thoughts on continuing the discussion, their various perspectives and concerns?

Potential Solutions

We believe the following traits could address each of the above issues respectively:

1) People can form a chat on a particular unit. At this point, we may make it one chat, but a stretch goal would be to allow more than one chat when there are a lot of people, breaking them into more manageable groups.

2) Edits (including additions and removals) are proposed to the large document but are not immediately in effect. They can be batched together as an "amendment". Amendments also have an attached chat through which people can discuss. An amendment must be approved before going through. We will have to consider how people will view amendments and have the opportunity to get them approved.

3) Amendments are made by individuals, not small groups. A small group can mediate how to agree on an amendment. One stretch idea is to allow the creation of polls for various changes to the large document.

4) Generally speaking, if there is more activity in a section of the large document, it should be more difficult to make a change. The idea is that more of the people involved in a section should consider amendments, to prevent overlapping ideas or bad organization. The platform will keep track of how many amendments were proposed in the past X duration of time; based on this, each new amendment requires more approval. Approval could be measured by number or by percentage.

5) The document will most likely be a combination of content and thoughts. The system should help a group perform what I am calling "group meta-cognition": the thought process of a group reflecting on how it should continue its actions, its discussion and how to organize that discussion. To do this, we have two metrics that people can influence for each unit in the large document: priority and explorability/stability.

Priority indicates how important people believe a unit is. It has three weights: weak, medium, strong. People can create new topic-units with some indicated priority to spur new areas of discussion. The final priority is determined through a weighted vote of all those who care to vote on it. If people believe a unit is more explorable, they believe there is still much worth discussing further. Conversely, if people believe a unit is more stable, they believe the unit should not be changed much; this could mean they believe discussion on a unit (maybe a topic or subtopic) is finished. This metric is also on a three-point scale determined by a weighted average of those who choose to vote: explorable, neutral, stable.

Perhaps the system can also recommend units as explorable by comparing a unit's priority to its normalized activity: if the unit has high priority but low activity, considering the number of people involved or edits made in total, it should probably be explored more. This normalized activity may also be something people can see to decide for themselves how to act. It will be determined only by the system and presented on a three-point scale: low, medium, high. We believe each metric should be "finite" to prevent overbearing positive feedback loops, such as the way more popularity generates more popularity. People can use these metrics to search for units of high explicit explorability, for example, or of potential explorability as indicated by the priority/normalized activity.
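
Here's a rough sketch of how these metrics might be computed. The numeric encodings of the three-point scales and the threshold are illustrative assumptions, not the final design:

// Illustrative sketch of the unit metrics; encodings and thresholds are assumptions.
const PRIORITY = { weak: 1, medium: 2, strong: 3 };
const EXPLORABILITY = { stable: -1, neutral: 0, explorable: 1 };

// Weighted vote over everyone who chose to vote, e.g. ["weak", "strong", "medium"].
function priorityScore(votes) {
    return votes.reduce((sum, v) => sum + PRIORITY[v], 0) / votes.length;   // 1..3
}

// Weighted average over everyone who chose to vote, e.g. ["explorable", "stable"].
function explorabilityScore(votes) {
    return votes.reduce((sum, v) => sum + EXPLORABILITY[v], 0) / votes.length;   // -1..1
}

// Recommend a unit as explorable when priority is high but normalized activity is low.
function recommendExplorable(unit) {
    return priorityScore(unit.priorityVotes) >= 2.5 && unit.normalizedActivity === "low";
}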

Culture

Some of these mechanics could be determined by culture. For point 4, the group can choose how activity influences the approval needed. We think this can help address some of the issues other platforms are facing with regard to how authoritarian or anarchistic management should be.

Further Considerations

- We are especially concerned about point 4 because of concurrency. We hope to manage some of it through socially-mediated mechanics.
- We want to improve the searchability of content—not just the ease in finding a specific kind of content but also in finding potentially interesting content. We hope to broaden what people think about through search metrics, as discussed in point 5.
- Combining the former bullets, we are concerned with how to present amendments in such a way that they are given the consideration they deserve. There should be some organization in how they are found, but we hope to help people keep an open mind about what they look into.
- We imagine most discussion should take place on the units themselves so the large document acts as the "discussion square", and that discussion on amendments is mostly to make sure the amendment is reasonable. We should probably examine this more.
- We focused mostly on the tree-structure. How can we help people network ideas across the tree to encourage more bridging between ideas?
- How will the large document be incorporated in our current model of a small discussion?

I've been mentioning for a while that a big part of the plan for the next few months is evaluation of specific affordances through somewhat more rigorous user research. Here's the plan for that:

January

- Codify the non-verbal communication design into three testable interfaces, plus one control. Implement them using the existing backend on four forks of the Pith codebase.
- Experimental design.
- Get experimental design approval from the IRB to go forward with testing.

February

- More IRB?
- Pre-test and deploy interface variations.
- Testing.

March

- Review results, create second interface iteration with turn-taking mechanism.
- Implement.
- Approval from the IRB for any changes made to experimental design.
- Testing of second iteration.

April

- Repeat of March's plan for a third iteration with modified affordances from the results of the first two iterations.

Re-implemented all of the join logic and routing using the framework I built last week. It took maybe two hours. Lessons learned from the previous build are seriously paying off.

Something I've been running into when designing the store for this project is that there are a number of different ways that it will need to be used, often pulling the ideal design in different directions. For example, sometimes methods attached to the store will need to be asynchronous, which suggests that the store should execute functions in a defined order such that each call can rely on the last call being complete. The problem is that this approach is needlessly slow and verbose in the majority of the cases.

This morning I came up with a design that tries to accomplish both goals, so that if needed the store's methods can be executed in a particular order, otherwise they execute in the background to keep things quick.

The idea is that when defining a store method, you can opt to accept two additional arguments, resolve and reject, which are taken from a promise that the store gets wrapped with. So a store might be built with a method called initialize which explicitly takes a resolve function and calls it when the API call is complete:

{
    initialize: (discussionId, resolve, reject) => {
        return (socket, update) => {
            socket.emit("blah", { id: discussionId }, (res) => {
                // update the store state with the response, then signal completion
                update(state => { return { ...state, result: res } })
                resolve()
            })
        }
    },
}

Now, in the component, I can explicitly wait for the store's initialize function to resolve; in this case I expect the API call to be complete when checking the result:

onMount(async () => {
    await discussionJoinStatus.initialize(id);

    if ($discussionJoinStatus.result === "blah") {
        await goto(`/d/${id}/join`);
    }
});

Alternatively, I can call the initialize function and then execute some other code that doesn't depend on the result while we're waiting for the work to complete:

onMount(() => {
    discussionJoinStatus.initialize(id);
    someOtherFunctionWillExecuteNOW()
});

This small change makes it easy to wait for calls that depend on previous data received from the server, but only if actually needed. The promise wrapping is taken care of by the custom store, so when it comes to writing the actual API calls and store modifiers, it's just a matter of opting to use the optional resolve function passed to the store method.
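
For reference, a minimal sketch of how that wrapping might look inside the custom store; the factory name and store shape are assumptions for illustration, not the actual implementation:

import { writable } from "svelte/store";

// Hypothetical factory: takes methods written in the (args, resolve, reject) style
// and exposes them as promise-returning functions on the store.
function createSocketStore(socket, methods) {
    const { subscribe, update } = writable({});

    const wrapped = {};
    for (const [name, method] of Object.entries(methods)) {
        wrapped[name] = (...args) =>
            new Promise((resolve, reject) => {
                // each method returns a function that receives the socket and update
                method(...args, resolve, reject)(socket, update);
            });
    }

    return { subscribe, ...wrapped };
}

A component can then either await discussionJoinStatus.initialize(id) or fire it and move on, as in the two onMount examples above.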

I've been thinking about simple ways of facilitating turn-taking. There are plenty of more inventive approaches to addressing this problem, but as a first step here are three dead simple turn-taking mechanisms to try, least restrictive → most restrictive:

0. No turn-taking mechanism (control).
1. Typing indicators, e.g. "joe is typing". Anyone can send a message at any time.
2. Acknowledgement of previous messages. You can only send a message after first clicking on all previous "unacknowledged" messages.
3. Single speaker typing lock. You can only send a message when nobody else is typing.

I don't think any of these will be the actual solution but I imagine very different dynamics would emerge in the discussion space as a result of each mechanism. Which dynamics end up being beneficial or detrimental is a tougher question that will likely be somewhat subjective.
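
To give a feel for how simple these are, the third mechanism might amount to something like this on the client; the event names and payloads are assumptions, not Pith's actual protocol:

// Sketch of the single-speaker typing lock (mechanism 3).
const typing = new Set();

socket.on("typing_start", ({ user }) => typing.add(user));
socket.on("typing_stop", ({ user }) => typing.delete(user));

function canSend(me) {
    // you may only send a message when nobody else is typing
    return [...typing].every(user => user === me);
}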

Finished writing the store I discussed in my last post. Added the ability to define custom methods that attach to it and mutate the state without needing to redefine a derived store for every variation. The whole thing is quite clean; no boilerplate needed. I think I should be able to get pretty far with this solution, we'll see.

Made some real progress in my understanding of how stores work in Svelte by writing a couple from scratch.

I wrote a store that's about 70 lines of code and handles all of the connection and queuing work that previously took about 450 lines of code in the Redux version. This is still very much a work in progress, but the goal is to generalize this store in a way that allows for any set of generic methods to be added which handle listening for/emitting specific events on the same socket object. Then I can create a series of smaller stores which are responsible for portions of the state, rather than one monolithic store that rerenders the entire app on every state change.

Put up a new post on the pith blog. Reposting here too.

The Intuition to Avoid Popularity Metrics

One intuition that we've had since the start of the Pith project is that when designing the space, popularity metrics should be avoided. As the project has progressed we've found that a surprising number of common affordances in digital social spaces can be framed as popularity metrics. Plenty of these affordances can be both functional and delightful to use, which has forced the question of why we're really trying to avoid popularity metrics in the first place.

The idea started as an intuition and will remain that throughout this article. I don't want to include academic studies or research under the guise of there being some sort of compelling statistical evidence that popularity metrics are detrimental in some way. In the coming months we'll be testing affordances in discussion spaces that aim to sort this question out more rigorously in the context of our own system. For now though, I do want to elaborate on the intuition in a way that tries to codify some of the ways popularity metrics are used and their implications, from my perspective at least.

There are two main ways that popularity metrics manifest in digital systems:

1. Algorithmically. More popular content is more likely to be shown to someone, or a core part of the system allows for browsing this content. This can mean content that is "engaging", "controversial", "most talked about", "hot", and so on.
2. Functionally. A part of the system that the user interacts with is used to support the algorithmic portion of the system. Examples include "likes", "up/downvotes", "retweets", "shares", and so on.

These two pieces often fit together so seamlessly in systems that they become a core part of how someone interacts with them. Upvotes are one very essential metric used by Reddit to determine what appears in someone's feed. That means people have to upvote content, which Reddit makes easy to do. The upvote becomes tied to Reddit's identity and how you're expected to engage with the platform. The same is true for the "like" button that dominates Facebook and Instagram. As are views on YouTube. Almost every platform that supports social interaction online has some piece of functionality that allows people to give feedback about what they think about something, which is then used in the calculation of whether that content should be shown to others.

But why? Why is this such a standard design pattern? I'd say it's probably because it works. People want to see things that are more "engaging" or "interesting"; content that's been verified as being worth their time by others. So if the end goal is to keep someone on your app or website, of course it makes sense to make sure they stay engaged with the juiciest, sickest, most shocking content you've got.

Sure, I want to see what's popular sometimes too. Everyone does. Popularity metrics can be a very effective way of pre-sorting a large quantity of unknown content into a smaller set that's more likely to have qualities that people will appreciate. But therein lies the real problem. Everyone appreciates different things, and you can't know what those things are ahead of time. Even I can't really predict what I'll find interesting sometimes.

The solution to this problem—as decided by most social platforms—is to push all the things someone has "liked" through a series of machine learning models which can then be used to predict if that person will like some new content. This answer is Zuck's favorite: to "build sophisticated AI tools". To be fair to Facebook and the others, of course this is the answer. If the basis of your system is a series of popularity metrics, then you have a lot of data about what every user of your system has liked. So obviously the solution is to build machine learning models. It's that or changing the fundamental way the system works. The choice to go the machine learning route makes a lot of sense.

Again, I think popularity metrics can be an effective way of sorting content. The question is more what the end goal of the system is; why do you need to sort the content, and what is the content? If the goal is to support exploration and productive discussion—our goal—popularity isn't a particularly good way of sorting content. In designing the system, it's a better idea to find the underlying motivation someone has for taking some action, and then to design affordances that allow for them to make that action in the purest sense of what the action is. Maybe there are times when explicitly introducing a popularity metric through something like a poll makes sense, but only insofar as the goal of the poll is to uncover what is and is not popular. If someone wants to know that others have seen their message, the affordance that's designed for this purpose should try to serve this goal and this goal only. A solution such as a "like" button introduces a lot of complexity beyond what the person making the post really wanted in the first place.

So, the intuition to avoid popularity metrics mostly stems from the abundant staleness of existing designs and their familiar, predictable problems. Of course, I haven't talked at all about the psychological impacts of these metrics. But that subject has been explored ad nauseam elsewhere, and I think we have a very keen sense for the problems there. Perhaps there's something about social technology and people that makes the tendency towards vapid sensationalism and fetishizing popularity inevitable. I certainly don't want that to be the truth, so creating and testing alternative approaches seems like a reasonable path forward.

Worked on passing styles to the editor component but ran into some limitations of Svelte. The workaround is to add a global marker to the styling and assign an id to the component element:

:global(#test > em, #test > strong, #test > code) {
	padding: 2px 5px;
	border-radius: 5px;
	background-color: black;
}

Kinda strange way to do it, but the only other way to get around this is to make the component an action so it can attach to an element defined by the parent.
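
A rough sketch of what that action-based alternative could look like; the action name and options are illustrative assumptions:

// Hypothetical sketch: the parent owns the element (and therefore its scoped styles),
// and the editor behaviour is attached to it with a Svelte action.
export function editor(node, options = {}) {
    node.contentEditable = "true";
    // set up keybindings, sanitization, etc. on the parent's node here

    return {
        destroy() {
            node.contentEditable = "false";
        },
    };
}

The parent would then use it as <div use:editor class="styled"> and style the element with ordinary scoped CSS.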

Spent the day working on refactoring the way the WebSocket connection from the client to the backend server is handled using Svelte stores. I'm learning as I go, which is probably the best way to learn, but means it takes a while.

Typing indicators are perhaps the more obvious of the two affordances I'm interested in. I think the primary purpose is to support turn-taking in an environment where we don't have the same sort of cues to understand when someone's done talking or has finished their point.

Of course some of this is linguistic, and there are plenty of models that develop a kind of ruleset to describe how turn-taking works in conversation.

I think it's worth thinking about how to build a robust turn-taking mechanism to support productive discussions. Robust could mean typing indicators, or it could mean something a bit more structured, or even looser. I'm not sure yet. Regardless, once designed this is something that I'd like to do some more formal testing around.

Interestingly, I can't find any of the same sort of descriptions of what "typing indicators" are from platforms that have implemented them in their chats. It's an affordance that's generated by the system without the user needing to do anything, so maybe that's why there's no information about why they exist. Typing indicators have also been around a while—all the way back to MSN Messenger—so it seems likely that they were mostly adopted as a standard without requiring much justification.

Emoji reactions, as described by companies that have implemented them.

Slack:

Emoji reactions are a quick way to respond to messages in Slack. They're both fun and helpful for getting work done — a simple reaction can often replace the need for a follow-up message.

Discord:

Sometimes an emoji is worth a thousand words. Instead of typing out your response to the age old coffee vs. tea debate, you can react to the post and let people know what's up:

FB Messenger:

Message Reactions are the ability to react to an individual message with a specific emotion, quickly showing acknowledgement or expressing how you feel in a lightweight way.

Facebook (maybe the first one?):

Today we’re launching a pilot test of Reactions — a more expressive Like button.

As you can see, it’s not a “dislike” button, though we hope it addresses the spirit of this request more broadly. We studied which comments and reactions are most commonly and universally expressed across Facebook, then worked to design an experience around them that was elegant and fun. Starting today Ireland and Spain can start loving, wow-ing, or expressing sympathy to posts on Facebook by hovering or long-pressing the Like button wherever they see it. We’ll use the feedback from this to improve the feature and hope to roll it out to everyone soon.

Signal:

When you’re standing next to a friend who says something funny, you can just laugh. You don’t need to pause and say “I found what you just said humorous” or quote their own words back to them before displaying a real-world emoji on your face. This feeling of immediacy and effortless response is what reactions are all about.

GitHub:

Every day, thousands of people are having conversations on GitHub around code, design, bugs, and new ideas. Sometimes there are complex and nuanced points to be made, but other times you just want to :+1: someone else’s comment. We’re adding Reactions to conversations today to help people express their feelings more simply and effectively.

Stack Overflow:

We’ve heard from our users that the inability to say “thank you” is frustrating—especially for new users who don’t have enough reputation to upvote or comment. Even when users gain these privileges, they still want to say “thanks.” . . . Based on this data and user research, we’ve decided to test a simple, clutter-free way to say thanks—a reaction button on answers across Stack Overflow.

There's lots to consider when it comes to non-verbal communication for the chat interface. There are two primary affordances I'm interested in that many messaging applications have implemented:

- Typing indicators/ellipses
- Emoji reactions

The goal is to drill down to what's essential here; what are the primary drivers that resulted in these design choices? What is it about people and their communication that makes these non-verbal cues effective?

My intuition is that the immediate answer is really obvious and the real answer is not really understandable. Though perhaps in both cases the implications are the same, we shall see.

Hello Futurelanders, this is Sydney, the other, ghostly half of Pith. Today we discussed how we are going to start working on the second part of this project. It looks like we have three weeks in January to devote to this endeavor. So we considered doing the following for the three weeks:
(1) [@sydney] design the beta, how the large discussion interacts with the many, small ones / [@christian] flesh out the designs for the conversational mechanics, i.e. the nonverbal communication
(2) Continue (1) and start doing (3)
(3) Use Svelte to rebuild the frontend for the Alpha in consideration of the expected designs.
Yay, so that's that. Ghostly muah ~.~. Goodbye!

An alternative approach: each change made to the document is immediately reflected in the local version of state. If the change is rejected by the server we remove it. This minimizes the number of moving parts significantly.

Goal for the remainder of the month is to get the document rebuilt in Svelte using a better-designed architecture to handle the concurrency issues we ran into before. We figured starting with the most difficult part would be good to ensure the rewrite is yielding some kind of progress. New versions will be released and evaluated much more frequently.

In parallel @sydney will begin the design of the next phase: the between-discussion networked space.

Started writing a test suite for the editor component. Yet another thing we didn't do before but probably should have done.

Worked on fixing a couple of issues with the editor and then published it as a package on npm.

I'm gonna try doing this with more of the components of the interface so I force myself to keep them as separable as possible. It'll also make it easier to reuse them in other interfaces later.

A nice little affordance that you don't usually get in online editors but do get in text editors like VS Code, Sublime, and Atom. Highlighting text then pressing "*" or "_" inserts those characters around the selection. Sort of like command + i or command + b, but with the markdown symbol itself.

It was pretty difficult to implement, mainly because the range has to be reset after changing the underlying HTML, which means keeping track of the selected range before and after making the change, then reconciling the difference. There are still some things to fix.
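
The core idea, stripped of the contenteditable and range-restoration work, looks roughly like this for a plain textarea; the real editor operates on HTML, which is where the difficulty described above comes in:

// Simplified sketch of wrap-on-keypress for a plain <textarea>.
function wrapSelection(textarea, char) {
    const { selectionStart: start, selectionEnd: end, value } = textarea;
    if (start === end) return false;                       // nothing selected
    textarea.value =
        value.slice(0, start) + char + value.slice(start, end) + char + value.slice(end);
    textarea.setSelectionRange(start + 1, end + 1);        // keep the selection on the text
    return true;
}

textarea.addEventListener("keydown", (e) => {
    if ((e.key === "*" || e.key === "_") && wrapSelection(textarea, e.key)) {
        e.preventDefault();                                // the characters are already inserted
    }
});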

Nice that you can super easily emit events up to the parent component like this. I think maybe I'll write a variation of this to limit the number of updates each editor sends out to once every couple seconds.

There is also the context API in Svelte that I have yet to really dig into but seems perhaps relevant here.

My first little experiment with Svelte. A nice simple text editor is one of the first things I started working on in the React version of Pith, so I figured it'd be good to start with it here too. I have to say I am absolutely loving the simplicity of Svelte so far.

Using this framework will also force me to write every component myself rather than relying on premade ones as I sometimes did before. I feel like I'm building a house from scratch, which is super satisfying.

Really productive design session today. We were able to re-contextualize the alpha in a much broader, more global view. Basically the gist is that a unit is the most atomic part of the system, and can contain child units, chats, and so on. The overall structure is a recursive list of units.

We are also playing around with adding a third dimension, that is, the explorability/solidity dichotomy. As branches of a discussion are expanded out, they become more solidified and thus are more visible at a coarse-grained resolution. Less explored directions are visible only at a higher resolution. Discussion branches out in a hierarchy of multiple dimensions, both in terms of topic/category and potential for future exploration. I like this direction in that besides solving a number of key questions we've been asking, it balances the conceptual/future state with the concrete/present state.

Lots of specifics to continue to hammer out to see what implications this design has for large 500+ person groups and small, more intimate 2-5 person ones. But the key is that on paper at least the same structure should apply to both.

@sydney and I discussed the new direction extensively today and reorganized the project timeline. First on the docket is alpha+ (0.1.0+), a rewrite of the alpha client to be far simpler.

Our main focus was finding ways to pare down the complexity of the client-side code for the project; a lot of the issues that arose were from the client becoming over-complicated and much too smart for its own good. I think the degree to which frameworks like React are abstracted away from the DOM results in code that can be much bulkier than needed.

Anyways, we're thinking about ways of keeping the interface and the code for rendering the interface as simple as possible. Seriously looking into Svelte and Sapper as an alternative approach. Being closer to the actual DOM and thinking a bit more template-y could be useful here. It would also rid us of the nastiness that is Redux. Plus, we're feeling that React and friends are a bit fat, and we're trying to keep Pith as essential and slim as possible.

This is totally new territory for me; I've only ever used React and plain HTML/JS. This project is a great opportunity to learn a different approach that doesn't feed into the JS bloat that is taking over every site on the internet. I really don't want 5s load times while massive JS modules download just to render a pretty basic page. Plus, we're at a crossroads in this project and continuing down the same path would mean probably not doing something different for a long, long time. Might as well start now.

New post is up on the Pith Blog on the most recent test. Reposting it here because it's relevant.

v0.0.1a Post Mortem

We tested the latest version of Pith today in a non-development context. To go along with this we've tagged the latest version of the code so we have a record of its state at each of these evaluation points. After each release we'll also be writing some kind of a post mortem to go over all the technical, design, and conceptual issues that arose.

The goal over the past two months or so was to get Pith to a place where we could explore how each of its parts interacted in a discussion environment. Pith takes a familiar chat system (like Discord, Slack, iMessage, etc.) and pairs it with a recursive list structure (like Notion, Workflowy, Roam Research). We arrived at a design where these two normally discrete systems are placed within the same virtual environment such that discussion takes multiple forms, from fleeting and ephemeral to preserved in as natural a way as possible. There are additional layers of deep integration between the two systems in Pith, such as transclusions, which allow any chat or document unit to reference another, and backlinks, which show all units that transclude a particular unit.

We developed a system from the ground up to implement this design. While the "proper" way to go about developing a system like Pith might have been to create a series of incremental prototypes that verify parts of core functionality, we set a goal of having a minimal version of all functionality implemented and worked towards it. While our approach was risky, it seemed necessary since all the individual parts of Pith had been verified as usable, valuable components on their own. We even performed our own user testing of a prototype of the chat system in the summer and were encouraged by the results.

Returning to the present, we have mostly achieved the goal we set out at the beginning of October of implementing a minimal version of all the core functionality in the system. There's a chat, document, transclusions, and searching. It's a multi-user environment and people can contribute to each part of the system at the same time. Yet it doesn't really work.

When it comes to writing code, I am very much a hacker and will brute-force my way through to implement what needs to be implemented. I come to programming from the perspective of a designer prototyping a design; I want to get the thing to work to the point it can be played with so I can understand what it's like to use it.

The problem with Pith is that it's extremely complicated. Getting to the point of being able to experience it means designing and implementing some very complicated sub-systems that aren't visible to the user but absolutely dictate how it feels to use. @sydney and I worked on designing a system that we thought would work best given the performance, implementation, and user experience constraints that we set for ourselves or had to work with. Unfortunately, our inexperience designing these sorts of complicated systems and our very high standards resulted in a complex and fragile system. Developing the backend to a point of stability took longer than expected, as did designing a robust interface between it and the frontend. This meant less time for testing and an overall less than optimal system design. As a result, we have decided the document sub-system is unsustainable to continue to develop and it will likely require a deep and complete redesign.

Pith has always been an informal project for me and Sydney. However, a series of changes have meant we've recently been looking at it from a more serious perspective. The introduction of funding for infrastructure and testing costs from two external organizations has also played a role in pushing us to take the project in a more formal direction. As a result, we feel somewhat more obligated to do things properly, even if it means completely rewriting a large section of the system.

It wasn't all negative, though. The chat system worked much closer to what we had intended and sets a solid foundation on which plenty of other meaningful functionality can be built. The next version of Pith will include additional experimental affordances in the chat; specifically our conception of virtual non-verbal communication. Lots more to come regarding this part of the system as well.

I've certainly found Pith to be the most difficult and engaging project I've ever worked on. The deep linkage between system design and user experience is an extremely challenging space to be working in. There's many more directions we're planning on pushing the project, many of which now have even more potential with our recent external support. We'll be getting something out there soon enough.

Did a big test of the system today and there were some very core issues that arose. Technically it works, but it's very far from an ideal model. Will be redesigning parts of the system in coming days.

I think there's one final big thing to get taken care of before doing an initial (extremely experimental) deploy of the system. Maybe a couple of small bugs here and there but those should be quick to squash. Hope to have this done by the end of the week but other stuff is ramping up too so we shall see.

Working on a homepage. Composing a page like this is incredibly fun; I take the same few elements and arrange them in a ton of different layout variations. I like the freedom and simplicity of it.

Added the ability to reply to units in the chat and made this little hover animation to make it a bit more obvious which unit you're replying to or recording to the doc.

As soon as I finished writing my last post I thought of a different way to handle the issue of reconciling newly created units with temporary ones. Rather than removing the temporary unit and copying its content to the new unit, I just keep the temporary unit and then dispatch any changes to it as if it were from the real unit by referencing the real unit's id rather than the temporary id.

It was pretty annoying and took two hours to write, but it works much more smoothly now and no content is lost, nor is it prohibitively laggy.

Fixed a bug that prevented your cursor from moving to a newly created unit right away. The interface would lag as it waited for confirmation from the server that the new unit existed, which meant if you typed immediately after pressing return the typed characters would disappear.

Fixed this by creating a temporary unit that can take any new input, which is then copied over to the new unit when the real unit is given by the server. I think this solution is okay but there's still some noticeable lag when the temporary content is moved over to the new unit. Will continue to think about ways around this problem.

Finished searching in the document today. Worked pretty much all of yesterday and today on a final sprint to get everything for the alpha version implemented by this afternoon, which we did successfully. Now it's just a matter of going through and fixing a bunch of remaining bugs before we can move on to the next big design phase.

Finished networking the document. Returning, deleting, editing, tabbing, and shift-tabbing units syncs your edits across all clients.

Still some cleanup to do with ensuring focus moves where it should after an action, but overall the main functionality is there. Now to tackle networking drag and drop…

Added a unit lock on units that others are editing at the same time. Also added an indicator underneath the focused unit to show who's editing it.

Editing document units is now networked. While editing, the most recent changes are broadcast to everyone every three seconds. So if you type very slowly, or take a lot of time to formulate a pith, others will have some feedback.

I'd like to see how often I can get this to update without it bogging down the server and other clients. I don't think it will get to near real-time (even Google Docs has a noticeable delay), but perhaps something that feels a bit more dynamic than every three seconds would be nice.
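
The current behaviour amounts to a simple throttle; something like the sketch below, where the event name and interval are assumptions:

// Broadcast at most one update per interval, always sending the latest content.
const INTERVAL_MS = 3000;
let pending = null;
let timer = null;

function broadcastEdit(unitId, content) {
    pending = { unitId, content };
    if (timer) return;                      // an update is already scheduled
    timer = setTimeout(() => {
        socket.emit("edit_unit", pending);  // send the most recent changes
        pending = null;
        timer = null;
    }, INTERVAL_MS);
}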

Added a scroll lock on the chat so when you're not at the bottom it doesn't shift the chat up, making you lose your place. An indicator is added to alert you to new posts.

When you're at the bottom, it automatically shifts up with new posts.

Fixed the chat so making a post doesn't make it jump up. The text you just posted is shown in light grey before the server responds with confirmation that it's been posted.

Made it so search results can be added to the message you're writing. It replaces the query you typed with the reference to the unit you choose.

Adjusted the way references look as well.

Also worked quite a bit on setting focus correctly in the text editor for a variety of edge cases. Still a bit more to do but nearly there I think.

Part of what has made developing the document so difficult is having multiple states for each unit. They can either be rendered statically (debugged in red here) or editable. Shifting focus around the document in a natural way means the editing state must be set by the component in response to the user hitting the return or delete key.

I think I'm getting there. I sorted out the shifting focus and rendered state, plus fixed a bunch of issues with creating new units when pressing "return". One bit of functionality that seems really simple but was an absolute pain to develop is when you hit "delete" at the beginning of a unit with content in it. In this case the content is appended to the unit above it and the cursor is placed at the point where the old content joins the new. Very happy with how that turned out; it feels very natural.

When you discover bugs while recording the screencap lol.

I've been working on getting the document dragging/movement nailed down today. Still some things to iron out but overall working well.

As soon as I started dragging units around the document I wanted to be able to drag a unit from the chat over to the document. I think a lot of stuff like this will continue to emerge as more functionality is added and we get to start playing with it.

A good nine hour (on and off) work session on Pith today. Most of it was spent refactoring the existing codebase to work with the new requests model. There were a ton of issues that we ran into so @sydney and I ended up switching back and forth as one of us fixed something so the other could work, and so on.

Much more of the functionality is in now, including joining discussions, adding posts, moving posts to the document, and opening units in the document. The next challenge is ensuring document editing works locally. After that it will be time to implement the networked document editing model, which will likely be quite tricky. Hopefully all the work we've put into designing the request model will pay off and there won't be any huge obstacles.

Worked with @sydney to design a sensible request queuing system for the frontend. We started out with a model where each pending request is marked as pending in the state, which led to a lot of complication and replicated boilerplate in the Redux actions and reducers.

We decided to redesign the entire request model so each request is identified by a unique id generated by the requesting component. This id is placed in a section of state when the request is resolved by the backend, so the component can then adopt the new state. This is exactly the sort of model needed to implement the more complicated state-swapping functionality I described earlier in the week, plus it cleans up the actions and reducers nicely.
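
A rough, Redux-flavored sketch of the pattern; the action types, state shape, and helper names are illustrative assumptions, not the actual Pith code:

import { v4 as uuid } from "uuid";

// The requesting component generates the id, so it can later find its own result in state.
function makeRequest(dispatch, socket, event, payload) {
    const requestId = uuid();
    dispatch({ type: "REQUEST_SENT", requestId });
    socket.emit(event, payload, (response) => {
        // the backend resolves the request; the id lands in state so the
        // component that made the request can adopt the new data
        dispatch({ type: "REQUEST_RESOLVED", requestId, response });
    });
    return requestId;
}

// Reducer fragment tracking each request by its id.
function requests(state = {}, action) {
    switch (action.type) {
        case "REQUEST_SENT":
            return { ...state, [action.requestId]: { pending: true } };
        case "REQUEST_RESOLVED":
            return { ...state, [action.requestId]: { pending: false, response: action.response } };
        default:
            return state;
    }
}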

The idea going into building the alpha was to get something done quickly, but our tendency to overengineer and ensure everything is built as elegantly as possible has resulted in it taking much longer than originally intended. Hopefully this extra time will pay off in the future with fewer bugs and scaling issues.

Worked for about six hours today rebuilding the document component to account for how data will need to be modified. Doing this correctly is super important as it hopefully will prevent a host of concurrency related bugs.

The difficult part of this is how updates made to the document by a user should be immediately and visibly reflected in the user's local interface, then invisibly updated according to the ground-truth version of the document held on the server.

The idea is to have the document render the most up-to-date version of the data (-) up until the user makes some action, such as editing or moving a unit. At this point, a copy of the state is made (*), an event is dispatched to the server (↑), and the modified copy of the data is rendered (=). Once the server responds with the new version of the data (↓), the copy is switched out for the live data (-).

------*↑====↓------

Figuring out the mechanics of implementing this across a number of components is challenging, but I'm making progress. This is an interesting area where the desired user experience directly informs how best to implement a particular part of the application. There are much easier ways to update data, but doing it this way will result in the most natural experience, which is by far the most important part.
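
A condensed sketch of this copy-and-swap flow for a single action; the event name and the applyMove helper are placeholders for illustration:

// Condensed sketch of the flow diagrammed above.
let live = null;        // ground-truth data from the server (-)
let draft = null;       // modified local copy rendered while waiting (* and =)

function render() {
    return draft ?? live;       // the view always shows the draft if one exists
}

function moveUnit(unitId, newParentId) {
    // applyMove is a placeholder for whatever mutation the action represents
    draft = applyMove(structuredClone(live), unitId, newParentId);        // * copy and modify
    socket.emit("move_unit", { unitId, newParentId }, (serverDoc) => {    // ↑ dispatch to the server
        live = serverDoc;                                                 // ↓ adopt the server's version
        draft = null;                                                     // swap back to the live data
    });
}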

Added two different states for a unit: display and editing. The display state shows the citation icons and locks the content, while the edit state shows the underlying citation representation and adds spellcheck.

These two states will make editing easier and help us to (hopefully) resolve any concurrency related issues by "locking" units that are being edited by someone else.

Set up generating new child units by hitting return in an existing unit. Focusing the new unit programmatically so the cursor is in the right spot has been, and will continue to be, a real pain to get working.

rainflame/pith: The client and server for the Pith discussion project.

Finally finished this recursive function that translates a cursor position from a child element to one of its arbitrarily distantly related great-great-…-grandparents some n steps up the DOM tree. For example, taking an index (denoted with #) from some child node:

this# is 

And translating that index to be relative to some higher level element:

<b>I <i>think</i> <u>that this# is really</u> fun.</b>

That same starting index of 4 is now 30 relative to the whole string.

It was an absolute pain to debug and required lots of patient help from @sydney to design.
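
A simplified sketch of the recursion, translating against an ancestor's textContent; the actual function works against the serialized HTML, so it also has to account for tag characters, which is where the numbers above come from:

// Walk up from a text node, adding the length of everything that precedes the
// current node at each level, until we reach the target ancestor.
function translateOffset(node, offset, ancestor) {
    if (node === ancestor) return offset;
    let preceding = 0;
    for (let sibling = node.previousSibling; sibling; sibling = sibling.previousSibling) {
        preceding += sibling.textContent.length;
    }
    return translateOffset(node.parentNode, preceding + offset, ancestor);
}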

Worked on the temporary intro screen for the alpha version of Pith. We're going to be setting up a small testing server so people other than ourselves can create small-scale discussions. This page will serve as the way for people to make those new discussions and have a list (on the right) of all the discussions they've generated and joined.

At this stage—and maybe forever—we don't see having an account as important. So links to joined discussions will be stored in local storage and displayed on this page, and sent via email as a backup.

Worked on the flow for joining a discussion as a new user. In the case your nickname already exists, you get a little error like this.

Starting to work on routing in preparation for hooking everything up to the API. Visiting a discussion page for the first time will automatically route you to the join page, after which you'll enter the discussion.

Added a grab handle on the left side of a unit in the document. Also made units in the document editable. Had to add the handle because clicking with the intention of dragging and clicking with the intention of editing are pretty much impossible to tell apart.

Not super happy with how the grab handle looks right now; will continue to mess with it a bit.

Worked on the search results page. It's triggered when you type ">" and the content afterwards is taken as the query.

Finished up the drag and drop functionality. Lots of cases to account for, but in the end I'm happy with the result. Feels pretty solid to use.

Got a good 6 hour working session in tonight and managed to get most of the way through implementing drag and drop on the document. I made it a bit harder for myself by building it from scratch but I think it'll pay off in the end by being much faster and more lightweight than the prebuilt solutions.

Found out about a project called ZOG that was an early pre-internet hypertext system. Information is placed on "frames" that are linked together into a large local network. What's interesting about these old systems is that they're really minimal, far more so than any kind of information organization system today. They had to be designed without any of the complex control systems that modern computers have, yet still be usable by the non-expert in a time before the familiarity of the now ubiquitous PC desktop interface. In this case, ZOG was put into use on an aircraft carrier and used as a kind of local network where people could look up information and leave "mail" for others. Many of the design choices made by the creators of this system can directly inform how Pith's document system works in terms of balancing simplicity and richness.

Reading these old papers is also fantastic; there are some really beautiful artifacts out there.

Another hover effect, this time on referenced units. Hovering a link draws attention to the correct reference preview above.

Implemented this little tooltip a couple of places around the interface. Hovering a section of the timeline shows it with the name of the page and when it was visited. Clicking will take you to that page.

First pass at implementing the interface layout is pretty much complete.

Currently it’s rendering a bunch of test data, so now I need to hook everything up to the API and do all the Redux stuff. I'll come back and make more UI changes once I've implemented some of the basic functionality.

Considering different timescale/clamp combinations for the timeline. Each timespan is converted to seconds or milliseconds, clamped, then either kept linear or put through a natural log function. It results in timelines that represent the same durations in very different ways.
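
Each combination amounts to a small transform like the one below; the clamp bounds and default units here are illustrative assumptions:

// One timescale/clamp combination: convert, clamp, then keep linear or take the natural log.
function timelineLength(durationMs, { unit = "s", min = 1, max = 3600, log = true } = {}) {
    const span = unit === "s" ? durationMs / 1000 : durationMs;   // seconds or milliseconds
    const clamped = Math.min(Math.max(span, min), max);           // clamp to the chosen range
    return log ? Math.log(clamped) : clamped;                     // natural log or linear
}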

Worked on including references to existing units in the chat, and displaying them above the referencing unit.

Lots more to fine tune here with spacing, colors, etc etc. Just getting the rough rendering working now.

Yesterday worked on formatting messages in the chat. Author and time are shown for "clusters" of messages, a bit like Discord does.

Worked on making a super minimal text entry box for the chat. It uses cmd/ctrl + i/b/u to set different text styles. So many of the editors available for React are totally overkill and 2+ MB, which is insane, so I made my own. This just uses the contenteditable property of a div with some HTML sanitization.
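
The core of such an entry box is tiny; a sketch of the idea, leaving out the sanitization step:

// Minimal contenteditable entry box with cmd/ctrl + i/b/u styling (sanitization omitted).
const entry = document.querySelector("#entry");   // <div id="entry" contenteditable="true"></div>
const commands = { i: "italic", b: "bold", u: "underline" };

entry.addEventListener("keydown", (e) => {
    if ((e.metaKey || e.ctrlKey) && commands[e.key]) {
        e.preventDefault();
        document.execCommand(commands[e.key]);     // deprecated, but the simplest option here
    }
});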

Working on the responsive layout with CSS grid. Two panels that display side-by-side with enough width, and collapse to one switchable panel on narrower screens. Doing it properly this time.

First post on Futureland for this project. It's composed of two parts—fancy IRC and recursive list structure—designed to intermingle and create a rich conversation space.
