Lessons Learned: Scientists, Distributed Teams, and Groupware

Executive Summary
This lessons learned document generalizes the challenges facing a specific distributed team of scientists on a specific project. Details about the project and the team are withheld in order to tease out characteristics which may be applicable to other teams at other times. The challenges analyzed include both management issues and the attempts to mitigate those issues with technological solutions. Recommendations are made to ameliorate some of the most prominent obstacles to productive team behavior.

The team suffered from communication issues, trust issues, and lack of a consensus gathering infrastructure. The funding agency escalated the scope of the project beyond what was originally proposed by forcing two related projects to merge. Unproductive traits destructive to teamwork increased with the escalation of scope. The intensified unproductive traits were not balanced by any mitigating factors which scaled in a similar manner. Traditional management methods (face-to-face, audio teleconference, and video teleconference meetings) were applied, but failed to provide effective project communications, consensus gathering, and decision-making support.

This report compares the current team with a "typical" open source software development project with respect to a number of factors deemed important. Both the team of scientists and a "typical" open source team are composed of geographically distributed subteams, employed by competing organizations. Successful open source teams are found to cooperate effectively because they have a well defined consensus gathering and decision making process, a means of accumulating project experience, a means of publishing a single current "vision" to all project members, and because individual contributions to the project are considered to be owned by the project instead of the individual. This contrasts with the scientists' lack of an effective consensus gathering and decision making mechanism, no method to accumulate project-related decisions, no method to publish a single "vision" to the team and reluctance to make individual contributions generally available to the group.

The identified differences between the struggling team of scientists and successful open source development teams are used to make recommendations. All recommendations concern the provision of project resources made necessary because of the increased project scale and complexity. These recommendations are intended to counterbalance and scale roughly in proportion to the unproductive tendencies of a team of scientists. Recommendations are as follows:


 * 1) Formalize a robust consensus gathering and decision making process at the start of the project. The recommendation of this report is to adopt the proven effective use of voting on email lists, as implemented by the Apache Software Foundation. (This is fully implementable with off-the-shelf software.)
 * 2) Provide a "project center" which accumulates all materials related to the project. The project center:
    * includes at least a document library for project files, editable web pages, and calendaring software (implementable with off-the-shelf software);
    * if an email list is used for the project, preferably makes the archives searchable and browsable (may require some development to integrate fully with the rest of the project center);
    * is maintained by the community and should reflect the accumulated experience on the project to date.
 * 3) Enhance the protections afforded to scientists' files, placing access control in the hands of the file's owner. (May require some development.)

Background
Characteristics of the situation in this case study may be broken down into two categories: team characteristics and project characteristics. As all science is driven by funding, the funding agency helped create this situation by compelling two smaller teams to combine their proposals into one bigger effort. However, for the purposes of this analysis, the funding agency is considered to be an external actor whose behavior cannot be influenced. The scope of this effort is to describe and analyze the group of scientists and their actions.

Project Characteristics
The final version of the project has the following characteristics:


 * 1) Composed of smaller proposals with similar experimental designs.
 * 2) The component proposals formed organically by individuals and subteams volunteering to work together.
 * 3) Experimental campaigns, being the major expense, motivate the funding agency to merge proposals with compatible experimental designs.
 * 4) The larger project is more complex to manage:
    * more specialities are involved
    * more subteams are involved
    * the final team is more geographically dispersed
    * remnants of the management structure from the original component proposals obscure the chain of command

Team Characteristics
The team has these characteristics:


 * 1) Two smaller teams with predefined management structures are incompletely merged.
 * 2) There is no decision making structure.
 * 3) All named investigators have direct accountability to the funding organization, with no enforceable accountability to a single entity within the project.
 * 4) Co-located subteams represent a collection of domain experts in a specialization.
 * 5) Subteams are complementary in that there is good separation of specializations without much redundancy.
 * 6) Planning and executing experiments is the bulk of the team's interaction.
 * 7) Data analysis is performed in isolation by the co-located subteams.
 * 8) Subteams perceive each other as competitors:
    * each subteam wants to be the first to publish;
    * subteams are reluctant to share data with other subteams until it has been published.

A narrative
This lessons learned document concerns the intersection of collaboration technology with the above-described team. A brief history of this team's interaction with technological aids is now given. While there is a component of actual interaction with technology, an ongoing theme is the misuse of technical terms in such a manner that it masks important differences between two courses of action, obscuring the need to make decisions about how time and money should be spent.

At the start of year two of the three year project, the funding agency noticed that the team produced a prodigious number of relatively small data files and offered additional funding to help the team manage their resources. Funding, of course, was contingent upon the team presenting a tractable design and implementation plan. From the start, there was a lack of uniform expectations among the members of the team. Some scientists essentially asked for an FTP site they could all access, but others wanted to be able to search for data files and publications by a team-defined set of parameters. Some scientists viewed the resource as a means of exchanging data while the project was being worked on, others viewed the data sharing site as a means of assembling an archive of data collected by these activities. Team management produced a whitepaper describing a system which could accept, categorize, and search for arbitrary files, be they data files from instruments, spreadsheets, or word processor documents. A strawman design for the parameters used to search for files was presented, with the understanding that the parameter definitions would need more detailed attention later.

In the same whitepaper, there was a proposal to do a literature review of sorts to collect and organize historical information on the same topic as the present activities. The result of this literature review would not be a survey article, as is normally the case. The outcome of the literature review was unspecified, and this is the only reason it intersected at all with collaboration technology. The activity was deceptively named data mining, which immediately sends false signals to anyone who understands the terminology. One possible outcome of this activity involved collecting files from historical investigations (e.g., spreadsheets, data files, scanned PDFs of previous papers) and classifying them with a similar or identical system to the one used for the current investigation. Another possible outcome of this activity was to have humans read the collected papers and extract relevant information into some common and easily comparable form. Both possible outcomes were referred to in abstract terms by the managers as "building a database". Thus, for this second activity, the term adopted by the project for the deliverable had to be accepted as the de-facto activity description, even though it was deceptive; and management failed to distinguish between two potential outcomes with vastly different levels of human labor, involvement, and expense.

The problem of failing to distinguish between two dramatically different activities was compounded when the term "database" also came to be applied to the file management activity mentioned above. Now there were three distinct activities, some of which were separate deliverables in the whitepaper, all appropriating the same term. Worse, "database" was always used in singular form. In discussions, there were not two databases (one for each deliverable), there was one. Managers would propose one set of requirements for one activity, another for the other activity, and it was always unclear to which activity they were referring because the only term used was "the database".

Fixing a misuse of vocabulary is simply a matter of willingness to educate and willingness to learn. The more important problem in this case is the failure to distinguish between activities with important differences. In this case, all parties misused the term "database" in such a way that it masked the need to make decisions about which of two very different activities to pursue. In fact, there was no common usage of the term at all: it had a unique personal meaning for each participant. Lack of a common vocabulary, even among managers, was the biggest obstacle to getting everyone on the same page. Eventually one of the technical staff was forced to resort to drawing pictures while forbidding use of the term "database" altogether. However, this did not occur until the pressure to display progress at a meeting with the funding agency compelled the managers to elevate the importance of unaddressed problems.

In the meantime, because the whitepaper was clear on the fact that file management would happen, technical implementation began. Rather than start from scratch, the team selected an enterprise content management system, using the open source/community version of Alfresco for document management. It was seen as a bonus that the groupware layer of the Alfresco software, Alfresco Share, collects together a number of tools in common use by open source software development teams without imposing tools specific to the task of software development (i.e., it contains document library, wiki, forum, and calendar components, but not a version control or software configuration management component). Because these tools accompanied the document management component of Alfresco, they were made available to the team.

Adoption of the toolset was nearly a total failure. This is partially because a faction of the scientists regarded the system as a data sharing site, and most of the data had already been shared by the time the site was delivered. The primary reason for the late arrival was that the need for such a system was not recognized until year two. At that point, time was required to design a strawman system, write the whitepaper, get it approved, and then wait for permission from the agency's IT department to set up the server outside the organizational firewall. By this time, there was no pressing need to share data, so more time was taken to learn about security as well as the means to integrate with the agency's existing user accounts (i.e., Active Directory). When the system was rolled out, there were no voluntary users: the scientists had addressed their file sharing needs in alternate ways in the interim, and they saw no value in collecting files in a common area after the data had been shared.

After the system became available, there was an "incident" with one of the subteams. The cause of this "incident" was declared to be a lack of communication, and the offenders were pressured to "use the website", by which management meant the "extra" groupware applications which accompanied the Alfresco document management solution. This was ineffective because only the offenders were pressured to use the website. In essence, this caused a flurry of requests for accounts, but once the offenders logged in and discovered an empty wiki, empty forum, empty calendar, etc., they simply remained silent and did not return. In this case, however, forcing a single subteam to communicate on the website could not have solved the problem that the subteam did not communicate well outside their small group. They simply would have been using the computer to talk to themselves since there was no one else from the team on the site.

The final phase of implementing the enterprise content management system was the definition of the parameters by which the files could be searched. This task was originally going to be accomplished by soliciting feedback on a draft version of the plan from stakeholders within the community. Time passed, deadlines approached, and managers suggested that domain specialists from multiple subteams meet (via video teleconference) and agree on a set of parameters which would be used without first soliciting feedback from external parties. No one was given the responsibility of making this meeting happen; more time passed, and the deadlines grew closer. Three of the investigators talked together on an audio teleconference and changed the objective to something more familiar and more comfortable, which was perceived to be easier: now, instead of managing raw data files, a single large spreadsheet would record a consistent set of measurements for each experiment. The three managers assembled a set of columns as a starting point for further development by others. Following this change, an individual was designated responsible for coordinating a single-subteam meeting where the columns were formalized.

As a result, a fourth distinct activity was created. It also was covered by the blanket term "database". This activity consisted largely of cutting and pasting information from tables in the journal articles being written, to the correct column in the master spreadsheet. Managers expected to see significant progress within a week's time, and this expectation was realistic. This managerial action further muddied the waters because it became unclear whether this new activity satisfied whatever definition of "database" was currently in fashion. It was clear that a new direction was being taken, but whether or not the original intent to manage files still applied as well became shrouded in doubt.

In effect, managers simultaneously defined an additional activity and an extra set of parameters not proposed in the whitepaper, while neglecting to agree on parameters necessary to complete a function which was specified in the whitepaper. The set of columns in the spreadsheet represents a consistent description of individual experiments (one experiment per row.) This is incompatible with the vocabulary needed to describe files, which typically contain groups of experiments. For instance, to accomplish the activity proposed in the whitepaper, managers needed to agree on a vocabulary capable of describing (and permitting search for) spreadsheets containing data for multiple experiments, journal articles which contain data from multiple experimental trials, etc. The unclear motivation for abruptly changing direction like this became a source of doubt about the fate of an activity they had proposed, won money for, and on which they spent a substantial sum.
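The incompatibility described above can be made concrete with a minimal sketch. All field names here are hypothetical illustrations, not the project's actual parameters: the point is only that a spreadsheet row describes exactly one experiment, while a file-level record must describe a container spanning many experiments, so the two vocabularies cannot share one schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExperimentRow:
    """One row of the master spreadsheet: a single experiment."""
    experiment_id: str
    temperature_c: float   # hypothetical measured parameters
    yield_pct: float

@dataclass
class FileRecord:
    """Search metadata for one uploaded file, which may span many experiments."""
    filename: str
    file_type: str                       # e.g. "spreadsheet", "journal article"
    experiment_ids: List[str] = field(default_factory=list)

# One spreadsheet row describes one experiment...
row = ExperimentRow("E-101", temperature_c=25.0, yield_pct=87.5)

# ...but a single data file aggregates several experiments, so searching
# for it requires file-level vocabulary, not per-experiment columns:
f = FileRecord("campaign3.xlsx", "spreadsheet", ["E-101", "E-102", "E-103"])
print(len(f.experiment_ids))  # 3
```

The per-row vocabulary agreed upon by the three managers answers "what was measured in experiment E-101?", while the whitepaper's file search requires answering "which files contain data about E-101?"; neither schema can stand in for the other.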

One final intersection with collaborative technology concerns the one tool the team used consistently: email. Email was used to coordinate meetings, resolve disputes, exchange data, and come to a common understanding of data sets. The team's use of email consisted of manually assembling a list of likely recipients in the "To" and "CC" fields. Individuals may or may not have defined a "group" in their mail clients, but this group was not held in common. Mailing list software was not used, and the project did not maintain an archive of emails relevant to the project.

Analysis
A few key points may be distilled from the above narrative as well as observation:


 * Email is a collaboration technology with which the team is comfortable.
    * Email represents the lowest-effort, easiest target for improvement in team tools, simply by providing an archived email list for the project.
    * Adding an email list archive viewer to the "project space" in Alfresco (or a similar tool) could be the bridge to a familiar technology which increases adoption.
 * There was no "one-stop-shopping" place to collect all things (files, planning, documentation, discussions about data, etc.) related to the project. The tool capable of providing one-stop shopping:
    * was an afterthought because it was bundled with software deployed in pursuit of a deliverable;
    * was not advertised as anything more than a file sharing solution;
    * was not adopted as the primary means of interaction among team members;
    * would have been incomplete due to the exclusion of email communication.
 * Videoteleconferencing was employed with limited success.
    * It was not available at all sites, so some travel was still required.
    * It adds to the scheduling hassle: in addition to finding a time convenient for all people, a common time must be found when the equipment at all sites (as well as the rooms in which it lives) is available.
 * All players were uncomfortable making decisions for the entire group; likewise, they did not feel empowered to call a meeting involving more than just their subteam.
    * No one was a subject matter expert in all facets of any decision.
    * Confidently deciding whom to include and whom to exclude from any given meeting on any given topic was rare.
    * There was no mechanism to force the distributed groups to meet, particularly on topics they did not regard as important.
 * Scientists were reluctant to post data on the project website, even when directed to, because doing so:
    * represents a change from "knowing who received your data and when" to "making it generally available to the team at their leisure";
    * entails the loss of a measure of control;
    * conflicts with a mindset of "data is owned by an individual" rather than "data is contributed to and owned by the team";
    * stokes fear and suspicion that a team member will publish "my" data first without giving "me" credit.
 * The flexible and inconstant use of terms:
    * masks differences between fundamentally different courses of action;
    * hides the need to make decisions about how to spend time and money;
    * detracts from the ability of the assembled people to act with focus as a team.
 * One subteam manager, a senior scientist, expressed a long-held belief that distributed teams are doomed to failure and that collocation is the key to success. He learned that from his mentor at the beginning of his career.

Comparison to successful open source projects
Open source projects routinely manage their efforts with collaborative software, with resounding success. The composition of teams working on open source software is comparable to teams of scientists working on distributed projects. Specifically, a common pattern is that distributed subteams of geographically collocated workers (with subteams working for different corporations) combine forces to cooperate on a common project. If geographical distribution were the key factor, open source projects would be at a severe disadvantage because they have a greater tendency to be global. This section attempts to tease out the differences between the team structures in order to form hypotheses about what could be changed in order to increase the chances of success. The summary table below is followed by discussion of specific factors requiring more detailed attention.

Decision Making / Consensus Gathering
One of the things expected of any open source project is a clear description of how decisions are made. Successful open source projects all have a makeshift or formal charter which is commonly held. At one extreme, the Apache Software Foundation is a formal nonprofit organization with bylaws in addition to a public description of how cooperation works within the organization. Comprehensive descriptions are presented, describing how to handle many situations which have arisen over the years. These rules apply to all projects for which the Foundation provides support and ownership. At the other extreme is a single-person project picked at random from Codehaus, for which potential contributors are directed to email the project owner. Clarity is preserved at all scales.

Here is an excerpt, describing how the Apache Foundation expects their distributed members to make decisions (note that the following text is quoted with permission under the provisions of the Apache Software License, which applies only to the following block of text, unless the "fair use" provision of copyright law could be seen to permit redistribution of this portion of the text under the licenses imposed by Wikiversity): Projects are normally auto governing and driven by the people who volunteer for the job. This is sometimes referred to as "do-ocracy" -- power of those who do. This functions well for most cases. When coordination is required, decisions are taken with a lazy consensus approach: a few positive votes with no negative vote is enough to get going. Voting is done with numbers:


 * +1 -- a positive vote
 * 0 -- abstain, have no opinion
 * -1 -- a negative vote

The rules require that a negative vote includes an alternative proposal or a detailed explanation of the reasons for the negative vote. The community then tries to gather consensus on an alternative proposal that resolves the issue. In the great majority of cases, the concerns leading to the negative vote can be addressed. This process is called "consensus gathering" and we consider it a very important indication of a healthy community.
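The lazy consensus rule quoted above is simple enough to express directly in code. The following sketch is purely illustrative (the function name and vote encoding are this report's own, not part of any Apache tooling): it tallies +1/0/-1 votes and applies the rule that a few positive votes suffice unless any veto is cast.

```python
def lazy_consensus(votes):
    """Apply Apache-style lazy consensus to a collection of votes.

    votes: iterable of integers, each +1 (approve), 0 (abstain), or -1 (veto).
    Returns True when the proposal passes: at least one positive vote and
    no negative votes. Abstentions carry no weight either way.
    """
    votes = list(votes)
    if any(v == -1 for v in votes):
        return False                     # a single veto blocks the proposal
    return any(v == +1 for v in votes)   # "a few positive votes" suffice

# A proposal with three approvals and one abstention passes:
print(lazy_consensus([+1, +1, 0, +1]))   # True
# One veto blocks it, regardless of how many approvals accompany it:
print(lazy_consensus([+1, +1, +1, -1]))  # False
```

In practice, of course, the "tally" happens informally in an email thread rather than in software; the value of the rule is that every member knows exactly what counts as a decision.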

The team described in this lessons learned document lacked any method to make decisions, and it lacked any means of gathering consensus from its members less drastic than calling a meeting involving everyone. It also lacked a mailing list which included all members. However, assuming the presence of a project email list, this simple, proven means of gathering consensus could be borrowed from the Apache Foundation.

Scope
Open source projects tend to be narrow in scope, such that the people working on them are conversant with most or all aspects of the relevant issues. There are two factors which push a team of scientists to construct a large proposal which no one on the team fully understands: pressure to submit cross-domain collaborative proposals in the first place; and the requirement to combine similar cross-domain collaborative proposals with competing teams.

Increased scope of the final project is a large factor driving complexity: there are more specialities involved, more teams which are more geographically distributed, and less familiarity with teammates.

Motivation
Scientists, and the organizations which employ them, are typically evaluated by the number of publications for which they get credit. Publications tend to be personal, isolated labors rather than team efforts, simply because the act of writing is a solitary endeavor. Scientists tend to view projects as publication-enabling funding, whereas the funding agencies concentrate on the deliverables. In the case of large teams which are compelled to work together, the deliverables may be the only reason for interaction at the end of the project.

Open source projects may have teams from competing corporations working toward a common goal. These corporations either need the software for some internal purpose or must add functionality for clients. Either way, improving the item which is held in common is the thing which benefits them.

Ownership and Trust
A team of scientists takes great pains to retain individual ownership over contributions to the project. This is because their primary motivation is to maximize the number of publications on which their name appears. Retaining individual ownership is a means of enforcing and ensuring the attribution of credit. By retaining individual ownership, each person knows who requested access to their data and can verify that they were appropriately credited if their data was used. Scientists fear failure to get credit to the point that placing their data on a protected team site is something that requires much convincing.

Open source projects, on the other hand, generally require some form of agreement that each contributor assigns ownership to some neutral party which holds the project in common. Some agreements are more formal than others. As an example, see the Apache contributor agreement. Freed from the need to ensure they get personal credit, contributors need only trust that other team members will not do things detrimental to the project. This level of trust is generally required before a contributor is put in a position where they can affect the project at all.

Project Resources
Scientists expect to bring resources to the table with them. There is generally no thought given to "overhead" like a coordination website or email lists. Such resources are typically unnecessary complications when the project is housed entirely in a single building under a single supervisor. The current project might have been easier to manage had there been a central "one-stop-shopping" site for all things relevant to the project. It is possible to carry this concept too far: it is not desirable to suggest that there be one site set up to handle all projects for all scientists. This is only to suggest that everything related to a single effort should be put in a single place.

In the open source world, the "overhead" is typically provided by a neutral party. In the case of Apache, "overhead" support is provided by the same entity to which all contributions are assigned. In the case of Sourceforge, free hosting is provided without the need to donate your efforts to them. In all cases, though, the presence of such resources is ubiquitous to the point where it is taken for granted. If email list software were to disappear tomorrow, all open source projects would be instantly crippled.

Conclusions and Recommendations
The issues identified in this document fall into two categories: those which teams of scientists are likely to be capable of addressing, and those which are not. Recommendations will be made for how scientists could better approach issues which are under their control. Where possible, suggestions to mitigate external impacts will be presented.

The key words MUST, MUST NOT, REQUIRED, SHALL, SHALL NOT, SHOULD, SHOULD NOT, RECOMMENDED, MAY, and OPTIONAL in this recommendations section of this document are to be interpreted as described in RFC 2119.

Formalizing a decision making process
The minimum action scientists MUST take if they are to facilitate the management of projects by consensus is to define a mechanism by which the team will make consensus decisions, as well as a method by which consensus will be gathered. In addition, the distinction between which decisions are reserved for managers and which are consensus decisions MUST be made clear. Finally, the subset of participants whose opinions contribute to the consensus MUST be explicit (e.g., everyone needs to know whose vote counts). So, the minimum action necessary to address this need is for the team to write and agree upon a mini-charter including these points. This should not be seen as "extra work", but rather as a means to avoid frustration during the project's execution.

This mechanism SHOULD NOT be fancy or complex, but MUST be agreed upon at the outset. The example cited above, where the Apache Software Foundation leverages email discussions to form a lightweight means of review and approval, is an elegant and unobtrusive means of gathering feedback on proposed directions. Such an approach:


 * removes roadblocks while keeping the entire team apprised of the status of efforts of other subteams
 * allows any interested member the opportunity to dissent with cause while providing a vehicle for the group as a whole to resolve legitimate concerns
 * has a proven track record of effectiveness even when calling a face-to-face, audio teleconference, or video teleconference meeting is impractical because the participants are spread out inconveniently across the Earth's 24 time zones
 * allows participants to interact using a familiar, comfortable tool (email)
 * inherently breaks down the residual hierarchical structures held over from original, pre-merge proposals
 * inherently breaks down barriers due to geographical "clumping" of subteams by organization and workgroups

The default recommendation of this paper is that projects which require consensus gathering for group decision making SHOULD adopt Apache's lightweight, proven effective technique, which utilizes familiar, comfortable tools and facilitates teamwork. Alternate mechanisms to accomplish the same objective MAY be proposed and agreed upon if there is need, but such an alternate mechanism SHOULD embody the characteristics identified above.

The corollary to this recommendation is that if email is to be used to make decisions about how to spend money for which an investigator is accountable, a project-centric email list SHOULD be established, the archives of which SHOULD be included with the rest of the project's critical documents. Additionally, the archives of the email list SHOULD be made available to the widest audience for which the content is appropriate. Audiences, specified in order from "widest" to "narrowest", are: public access, team access, manager-only access. Note that it is common practice and frequently appropriate to afford wider access to the mailing list archives than is afforded to the mailing list itself (i.e., allow public access to the archive of a mailing list in which only team members could participate).

Formalizing the decision making process and identifying a consensus gathering mechanism addresses the following identified issues:
 * lack of a collaborative decision making process
 * inability to decide on a set of parameters by which project files should be searched
 * difficulty of calling a face-to-face/video teleconference meeting to come to a consensus
 * lack of will to call a face-to-face/video teleconference meeting to come to a consensus
 * isolation of subteams into noncommunicative islands of activity
 * the "sense" of team members that they are not empowered to make decisions on behalf of the team

Publishing the accumulated experience of the team
This recommendation may be satisfied with a variety of off-the-shelf tools. The previous section recommends a method to gather consensus and make decisions. This section recommends a method of recording the consensus at which interested parties arrived, publishing it for the whole team. While the previous recommendation applies to "getting people on the same page", this recommendation applies to "keeping people on the same page".

A central "one-stop-shopping" project center SHOULD be established to contain everything related to the project. This project center SHOULD accumulate everything relevant to the project as the project is executed. The purpose of doing so is to create an information resource for the entire team to use directly. Management SHOULD write a one-page summary of each proposed activity at the start of the project, explaining if and how the activity relates to the other activities. These pages SHOULD be updated as decisions change the primary function of the activities.

The "project center" tool works in conjunction with the email list tool. Communications regarding issues larger than a single subteam SHOULD still use the email list. After this communication occurs, the project center provides a number of options for recording whatever decision was reached:

 1. Publish the email archives, searchable by keyword, allowing a discussion thread to be followed.
 2. Write a "synthesis" web page which describes the issues, the major discussion points, and the final consensus decision.
 3. Alter the main "activity summary" page to reflect the decision.

This type of resource is "all-or-nothing", as this team of scientists illustrates: very little value is realized unless the entire team uses the project center. A well-used project center is a critical tool for keeping a large distributed team on the same page.

Providing and using a project center addresses the following issues:
 * lack of information:
    * about the project itself
    * about the activities of other subteams
    * about key players (workers/non-managers) in other subteams
    * about how the project has changed from what was originally proposed
 * team members can research the impacts of their proposed actions on other subteams before making a proposal, leading to more informed starting points
 * provides a common place to specify the definition of terms and the scope of activities

Mitigating impact of ownership requirements
Implementing this recommendation may require tool development, although it might be a relatively simple matter of defining a workflow inside a system which allows user-configurable workflows (such as Alfresco). On some systems, it may simply be a matter of configuration.

Scientists' distrust of colleagues, and their compulsion to retain ownership of their data, are treated here as external factors not amenable to change, whatever the cause. We must therefore assume that scientists will always have trust and ownership issues while temporarily cooperating with people who usually compete for the same pools of money. This behavior is destructive to team dynamics, however, and appears to be more destructive at larger scales than at smaller ones. This paper recommends that scientists and managers wishing to reduce distrust and increase productive team behavior augment the "project center" described above with improved access controls, access logging, and non-repudiation facilities for documents. In essence, this recommendation mitigates distrust by providing an environment in which it is harder to "get away with" stealing data.

Characteristics of the recommended solution are as follows:

 * Files MUST be stored in a common team place, and SHOULD be part of a "project center" if one exists.
 * Owners MUST be able to designate individual files, or groups of files, as "restricted".
 * Restricted files:
    * SHOULD NOT be hidden from team members who have no access (i.e., the filename and descriptive metadata may be viewed by all team members).
    * MUST be inaccessible by default to other team members (i.e., team members without access rights cannot preview, edit, or download the file).
    * MUST have a means of allowing the owner to control access permissions, which MUST be assignable on an individual basis.
    * MUST have a means for team members to request access from the file owner, and SHOULD allow team members to request a specific type of access (read-only/download access, edit access, etc.).
 * The system SHOULD maintain a list of requests for access to "restricted" files, as well as the disposition of each request by the owner. This list SHOULD be available for all team members to view.
 * The system MAY maintain an access log for each "restricted" file, which SHOULD be available for all team members to view.
 * The system MUST be neutral, tamper-resistant (or at least tamper-evident), and reliable enough to be used for dispute resolution.
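The requirements above amount to a small data model: deny-by-default content, team-visible metadata and request lists, owner-controlled grants, and logged access. A minimal sketch follows; all class and method names (RestrictedFile, request_access, etc.) are hypothetical illustrations, not from any particular groupware product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessRequest:
    requester: str
    access_type: str               # e.g. "read-only", "edit"
    disposition: str = "pending"   # owner later sets "granted" or "denied"

@dataclass
class RestrictedFile:
    """Minimal model of a 'restricted' file in a project center."""
    name: str                                      # metadata: visible to all
    owner: str
    grants: dict = field(default_factory=dict)     # user -> granted access type
    requests: list = field(default_factory=list)   # team-visible request list
    access_log: list = field(default_factory=list) # team-visible access log

    def can_read(self, user: str) -> bool:
        # Inaccessible by default; only the owner and granted users may read.
        return user == self.owner or user in self.grants

    def request_access(self, requester: str, access_type: str) -> AccessRequest:
        req = AccessRequest(requester, access_type)
        self.requests.append(req)
        return req

    def decide(self, actor: str, req: AccessRequest, grant: bool) -> None:
        # Only the owner controls access permissions, per-individual.
        if actor != self.owner:
            raise PermissionError("only the owner controls access")
        req.disposition = "granted" if grant else "denied"
        if grant:
            self.grants[req.requester] = req.access_type

    def read(self, user: str) -> str:
        # Every access attempt is logged, granted or not.
        self.access_log.append((user, datetime.now(timezone.utc),
                                self.can_read(user)))
        if not self.can_read(user):
            raise PermissionError(f"{user} has no access to {self.name}")
        return f"contents of {self.name}"
```

A production system would additionally need tamper-evident storage for the request and access logs (e.g., append-only, signed entries) before they could credibly support dispute resolution.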