Model 3 uses the coding scheme of Kern, Brett, Weingart, and Eck (2020). The simulation used was Towers Market (negotiationandteamresources.com). The authors shared with us 32 transcripts they had collected and coded using human coders. Their coding scheme began with 37 codes but was later reduced, for their analyses, to four strategic categories formed by crossing integrative versus distributive with information seeking versus information sharing. In our Model 3, we include two versions: a longer version (based on the 37 codes, reduced to 26 codes) and a shorter version (based on the four strategic clusters, plus an “other” category). All analyses provide results for both versions.


Each of the codes is shown in the section below. For each of the 26 codes, we provide a definition of the code, a short explanation, and sample sentences from the transcripts. The sample sentences let you see how these scholars operationalized their codes, which is what Model 3 learned from and tries to reproduce when coding your transcripts. For the 5-code version, we provide a definition and show which of the 26 codes were assigned to each of the four strategic clusters and the “other” category.


As with any coding scheme, different scholars might operationalize concepts slightly differently. You should decide if this coding scheme will be useful to you by reviewing how the authors used it.


When reporting your results from this model, please cite this paper:


Friedman, R., Zhan, X., Tyagi, S., Brett, J., Hooper, M., Babbit, K., & Acharya, M. (2026). Coding Negotiations with AI: Instructions and Validation for Coding Model 3. Click here to see paper.

Below we have listed each of the codes used in our 26-code and 5-code versions of the coding scheme used by Kern et al. (2020).

Kern et al. (2020) started with a 37-code scheme. The coding manual for the initial 37 codes can be found (here). To make coding feasible for our Model 3, we combined some codes, producing our 26-code version.

In their 2020 study, Kern et al. combined their codes into four “strategic categories” (Integrative Information, Integrative Action, Distributive Information, Distributive Action). A table from Kern et al. (2020) explaining these four categories is provided (here). Our Model 3 also reports these four codes (along with a fifth “excluded” category for codes that do not fit into the strategic categories).

Example sentences for each of the 37 codes are provided (here). The purpose of providing these sample sentences is to let you see how these scholars had their coders operationalize the codes. This way of thinking is what Model 3 has learned and is trained to reproduce.

A spreadsheet showing code definitions and how the 37 codes map onto the 26 codes and the 5 codes is provided (here).

26 Code Version
OS

Description: SINGLE-ISSUE – secure agreement on one issue.

  • Make an offer focused on a single issue.
  • Try to secure agreement on that one issue.
OM

Description: MULTI-ISSUE – secure agreement on two or more issues.

  • Make an offer that covers 2+ issues (a package).
  • Try to secure agreement across multiple issues.
IP

Description: ISSUE PREFERENCES – within a single issue.

  • State preference on one issue (e.g., option/level you want).
IB

Description: BOTTOM-LINE – within a single issue or for a package.

  • State minimum/maximum acceptable threshold.
  • Can apply to one issue or an entire package.
IR

Description: PRIORITIES – relative importance of issue(s).

  • Rank issues or state what matters most/least.
SB

Description: DEFENDING ARGUMENTS – argue position on issue.

  • Provide reasons/justifications for your position or offer.
SF

Description: FACTUAL STATEMENTS – facts or task clarifications that are specific/true.

  • State concrete facts, constraints, or clarifications about the task.
QO

Description: Clarification or question relating to an offer.

  • Ask for clarification about an offer or how it applies.
QR

Description: Ask about relative importance of issue(s).

  • Ask which issues matter most/least to the other party.
QP

Description: Ask about preferences within a single issue.

  • Ask what option/level they prefer for an issue.
QB

Description: Ask for bottom line within an issue or package.

  • Ask their minimum/maximum acceptable threshold.
QS

Description: Question/clarification of argument presented.

  • Probe or challenge the reasoning/justification provided.
QM

Description: Ask miscellaneous task-related questions.

  • Ask task/process clarifications not covered by other question codes.
IN

Description: Summarizing others’ interests.

  • Accurately restate what the other party cares about / is trying to achieve.
MU

Description: Noting mutual interests.

  • Point out shared goals or aligned interests.
SIM

Description: Noting similarities related to preferences, priorities, or other aspects.

  • Note where preferences/priorities/positions match.
DIFF

Description: Noting differences related to preferences, priorities, or other aspects.

  • Note where preferences/priorities/positions differ.
COER

Description: Coercion/threats or showing ability to dominate.

  • Threaten, pressure, or reference power/alternatives to force compliance.
AGR

Description: Express agreement with an offer/statement, or acknowledge what the other says.

  • “Yes / I agree / that works” or simple acknowledgment.
DIS

Description: Express disagreement with an offer or statement.

  • Reject/oppose an offer or claim without necessarily making a counteroffer.
INTPROC

Description: Offer a compromise/trade-off/exchange with the other party.

  • Propose “if you give X, I can give Y” process-level trade-offs.
P1

Description: Deal with one issue at a time.

  • Suggest focusing on a single issue before moving on.
PM

Description: Moving on without resolution.

  • Propose switching topics despite no agreement yet.
PT

Description: Time checks.

  • References to time remaining, deadlines, pacing, etc.
MISC

Description: Misc comments that do not fit into other categories.

  • On-topic content that doesn’t match any other code definition.
CS

Description: Potential solutions outside boundaries of task.

  • Suggest novel options not explicitly listed in the negotiation task.
5 Code Version

Distributive Information Expression and Exploration
Behaviors included in strategic cluster:
  • States issue preferences
  • Asks for preferences
  • Provides information about bottom line
  • Asks for bottom line
  • Makes statements about facts or task clarification
  • Notes differences in preferences and priorities
  • Notes task differences

Value Claiming
Behaviors included in strategic cluster:
  • Substantiates position
  • Makes single-issue offer
  • Disagrees with statement
  • Asks about others’ substantiation
  • Disagrees with offer made
  • Refers to power

Integrative Information Expression and Exploration
Behaviors included in strategic cluster:
  • Acknowledges without agreement
  • Agrees with offer
  • Agrees with statement
  • Suggests process of addressing one issue at a time
  • Shows insight (summarizes others’ interests)
  • Notes task similarities
  • Notes similarities in preferences and priorities
  • Notes mutual interests
  • Suggests moving on without resolution

Value Creating
Behaviors included in strategic cluster:
  • Asks questions about offer
  • Provides information about issue priorities
  • Makes multi-issue offer
  • Suggests package trade-off
  • Suggests compromise
  • Asks for priorities
  • Suggests reciprocity – concession now in exchange for future concession

Other (“excluded”)
  • Codes not included in the four strategy clusters
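For illustration, here is a partial mapping from 26-code labels to the five clusters, based on our reading of the behavior lists above. This is our own sketch, not the authors’ reference mapping (which is in the linked spreadsheet), and treating unlisted codes as “Other” is our assumption.

```python
# Partial sketch (our reading of the cluster lists above) of how some of
# the 26 codes roll up into the five strategic clusters. The full,
# authoritative mapping is in the spreadsheet linked earlier.
CLUSTER_OF = {
    "IP": "Distributive Information Expression and Exploration",   # issue preferences
    "IB": "Distributive Information Expression and Exploration",   # bottom line
    "OS": "Value Claiming",                                        # single-issue offer
    "COER": "Value Claiming",                                      # refer to power
    "IN": "Integrative Information Expression and Exploration",    # summarizes interests
    "MU": "Integrative Information Expression and Exploration",    # mutual interests
    "OM": "Value Creating",                                        # multi-issue offer
    "QR": "Value Creating",                                        # ask for priorities
    "MISC": "Other",                                               # excluded from clusters
}

def to_cluster(code):
    """Map a 26-code label to its strategic cluster.

    Codes missing from this partial table default to "Other" -- an
    assumption of this sketch, not a rule from the coding manual.
    """
    return CLUSTER_OF.get(code, "Other")
```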

Transcripts are coded in three steps:

1. Unitization (you need to do this): The model provides one code for each set of words or sentences that you identify as a unit in your Excel document. Units can be speaking turns, sentences, or thought units. The easiest to set up is speaking turns, since switching between speakers is clearly identifiable in transcripts. The next easiest is sentences, which are identified by one of the symbols . ? ! (though different transcribers may end sentences in different places). The hardest unit to create is the thought unit, since that takes careful analysis and can represent as much work as the coding itself. (See the NegotiAct coding manual3 for how to create thought units.) Clarity of meaning runs in the opposite direction: the longer the unit, the more likely it contains multiple ideas, and the less clear it is to human or AI coders which part to code. Aslani et al. (2014) coded speaking turns, but 72% of their speaking turns contained just one sentence. The closest alignment with the training data would be for you to use sentences as the unit.
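If you choose sentences as your unit, the splitting on the . ? ! markers mentioned above can be sketched in Python. This is a simplified helper of our own, not part of the model; real transcripts may need a more careful splitter (abbreviations, ellipses, transcriber quirks).

```python
import re

def to_sentence_units(turns):
    """Split each (speaker, text) speaking turn into sentence units.

    Sentences are delimited by ., ?, or !; the delimiter stays attached
    to its sentence and empty fragments are dropped.
    """
    units = []
    for speaker, text in turns:
        for sentence in re.findall(r"[^.?!]+[.?!]?", text):
            sentence = sentence.strip()
            if sentence:
                units.append((speaker, sentence))
    return units

# Two speaking turns become three sentence units, one row each.
turns = [
    ("Buyer", "I can offer $500 for the produce. Is that acceptable?"),
    ("Seller", "That is too low!"),
]
units = to_sentence_units(turns)
```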

2. Model Assigns Code: The model assigns a code to each unit you submit, based on in-context learning. Coding is guided by the prompt we developed and tested. For more on in-context learning see Xie and Min (2022). Our prompt for this model includes several elements:

  • Five fully coded transcripts. These transcripts were chosen from the 75 available transcripts in the following way. First, any combination of five was considered only if that set included all 13 codes. Second, five of those combinations were chosen at random to test. Third, the one that produced the highest level of match with human coders was retained.
  • Instructions to pay attention to who was speaking, such as “buyer” or “seller”.
  • Instructions to pay attention to what was said in the conversation before and after the unit being coded.
  • Supplementary instructions about the difference between “substantiation” and “information” since in early tests the model often coded substantiation as information, and vice versa. This confusion is not surprising since substantiation usually comes in the form of providing information, but with the purpose of supporting a specific offer or demand.
  • Additional examples of any codes where the five training transcripts did not contain at least 15 examples. We created enough additional examples (based on our understanding of the code) to bring the examples up to 15. We needed to add 12 examples of multi-issue offer, 12 examples of offer rejected, and 14 examples of Miscellaneous Off-Task.
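The three-step selection of training transcripts described in the first bullet can be sketched as follows. This is an illustration with names of our own; step three (testing each candidate set against human coders and keeping the best match) is noted but not simulated.

```python
import random
from itertools import combinations

def pick_training_set(transcripts, all_codes, n_candidates=5, seed=0):
    """Sketch of the training-transcript selection procedure.

    `transcripts` maps transcript id -> set of codes appearing in it.
    1. Keep only 5-transcript combinations whose codes cover `all_codes`.
    2. Sample a handful of those combinations at random.
    3. The caller then tests each candidate against human coders and
       keeps the one with the highest match (not simulated here).
    """
    covering = [
        combo for combo in combinations(transcripts, 5)
        if set().union(*(transcripts[t] for t in combo)) >= all_codes
    ]
    rng = random.Random(seed)
    return rng.sample(covering, min(n_candidates, len(covering)))
```

The `seed` argument just makes the random draw repeatable for this illustration.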

3. We Run the Model Five Times: We automatically run the model five times to assess consistency of results. As expected, the results are not always the same, since with in-context learning the model learns anew with each run and may learn slightly differently each time. Variation is also expected since some units may reasonably be coded in several ways. By running the coding model five times, we get five codes assigned to each speaking unit. If three, four, or five of the five runs produce the same code, we report that code and indicate its level of “consistency” (three, four, or five out of five). If there are not at least three consistent results out of five runs, or if the model fails to assign a code, we do not report a model code. In these cases, the researcher needs to do human coding.
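The 3-of-5 consistency rule can be sketched as below (the function name is ours; `None` stands for a run that failed to assign a code).

```python
from collections import Counter

def consolidate_runs(codes_per_run):
    """Apply the 3-of-5 consistency rule to one unit's five run results.

    Returns (code, consistency) when at least three runs agree,
    otherwise (None, None) -- meaning the unit falls back to human coding.
    """
    counts = Counter(c for c in codes_per_run if c is not None)
    if not counts:
        return None, None
    code, n = counts.most_common(1)[0]
    return (code, n) if n >= 3 else (None, None)

print(consolidate_runs(["OS", "OS", "OM", "OS", "OS"]))  # ('OS', 4)
print(consolidate_runs(["OS", "OM", "IP", "OS", "OM"]))  # no 3-of-5 majority
```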

See pages 4-12 of the model introduction paper: Click here to see paper

Set up your transcripts for analysis by putting them into an Excel sheet. Files must not be longer than 999 rows (if you have longer transcripts, split them into smaller files). The format should be as shown below. Label the first column “SpeakerName” and list whatever names you have for the speakers (e.g., buyer/seller, John/Mary). Label the second column “Content” and include the material contained in your unit of analysis (which may be a speaking turn, a sentence, or a thought unit). Also include columns for “ResearcherName”, “Email”, and “Institution” (often a university) and enter that information in the first data row. Note that there is no space in the headings “SpeakerName” and “ResearcherName.”

If you use speaking turns then speakers will alternate, and the format will look like this:

SpeakerName Content ResearcherName Email Institution
Buyer Words in a speaking turn… Your Name Your Email Your Institution
Seller Words in a speaking turn…
Buyer Words in a speaking turn…
Seller Words in a speaking turn…
etc. Words in a speaking turn…

If you use sentences or thought units then it is possible that speakers may appear several times in a row, and the format will look like this:

SpeakerName Content ResearcherName Email Institution
Buyer Words in sentence or thought unit… Your Name Your Email Your Institution
Seller Words in sentence or thought unit…
Seller Words in sentence or thought unit…
Seller Words in sentence or thought unit…
Buyer Words in sentence or thought unit…
Buyer Words in sentence or thought unit…
Seller Words in sentence or thought unit…
etc. Words in sentence or thought unit…

Create one Excel file for each transcript. Name each file in the following way:

  • YourName_StudyName_1
  • YourName_StudyName_2
  • YourName_StudyName_3
  • etc.

For example, my first file would be named “RayFriedman_CrownStudy_1” and the second file would be named “RayFriedman_CrownStudy_2”, and so on.
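The sheet layout, the 999-row limit, and the researcher-details row can be sketched in Python. This is a plain-list illustration of our own; the researcher details and email below are placeholders, and writing the rows out to .xlsx is left to your spreadsheet tool (e.g., pandas’ DataFrame.to_excel, which needs openpyxl).

```python
def make_transcript_rows(units, researcher, email, institution):
    """Build one transcript sheet (as a list of rows) in the layout above.

    `units` is a list of (speaker, content) pairs. Researcher details go
    in the first data row only, matching the examples shown.
    """
    if len(units) > 999:
        raise ValueError("split this transcript: files must not exceed 999 rows")
    header = ["SpeakerName", "Content", "ResearcherName", "Email", "Institution"]
    rows = [header]
    for i, (speaker, content) in enumerate(units):
        extra = [researcher, email, institution] if i == 0 else ["", "", ""]
        rows.append([speaker, content] + extra)
    return rows

# Placeholder researcher details for illustration only.
sheet = make_transcript_rows(
    [("Buyer", "Words in a speaking turn..."),
     ("Seller", "Words in a speaking turn...")],
    "Ray Friedman", "ray@example.edu", "Vanderbilt University",
)
```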

To submit your transcripts for the model to code, drag and drop one or several transcript files into the section below. If you see the files you want to code listed properly (just below the “Upload” button), click Submit. Note that each time you upload new files, they replace the previously uploaded files and are ready to submit.

Once your files are successfully uploaded, you will see a Task ID and Passcode. Be sure to save them; you cannot access them later! It will take about 3-15 minutes for Claude to process each transcript, depending on how many users Claude has at that moment. When you think it might be finished, go to the Results Retrieval Page and enter your Task ID and Passcode. If the results are ready, they will download automatically.

If you need to manually retrieve your results, you can use the Results Retrieval Page with your Task ID and Passcode.

We suggest submitting just a few files at a time, so that you can check the output before doing too many analyses. The output file will include:

  • Transcript Name
  • Speaker
  • The text (thought unit, sentence, or speaking turn)
  • The code assigned to that text
  • Consistency score for that code

Notes

1 This project was later expanded and published (but did not use the coding) as Aslani, S., Ramirez-Marin, J., Brett, J., Yao, J., Semnani-Azad, Z., Zhang, Z. X., ... & Adair, W. (2016). Dignity, face, and honor cultures: A study of negotiation strategy and outcomes in three cultures. Journal of Organizational Behavior, 37(8), 1178-1201.

2 Weingart, L. R., Thompson, L. L., Bazerman, M. H., & Carroll, J. S. (1990). Tactical behavior and negotiation outcomes. International Journal of Conflict Management, 1, 7-31; Gunia, B. C., Brett, J. M., Nandkeolyar, A. K., & Kamdar, D. (2011). Paying a price: Culture, trust, and negotiation consequences. Journal of Applied Psychology, 96, 774-789; Adair, W. L., & Brett, J. M. (2005). The negotiation dance: Time, culture, and behavioral sequences in negotiation. Organization Science, 16, 33-51.

3 In the supplementary file for: Jackel, E., Zerres, A., Hamshorn de Sanchez, C., Lehmann-Willenbrock, N., & Huffmeier, J. (2022). “NegotiAct: Introducing a comprehensive coding scheme to capture temporal interaction patterns in negotiations,” Group and Organization Management.

4 R Core Team (2022). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/

5 Gamer, M., Lemon, J., Fellows, I., & Singh, P. (2019). irr: Various coefficients of interrater reliability and agreement. R package version 0.84.1. https://CRAN.R-project.org/package=irr

6 Landis, J. R., & Koch, G. G. (1977). “The measurement of observer agreement for categorical data.” Biometrics, 33(1), 159-174. doi:10.2307/2529310.

7 Fleiss, J. L. (1981). Statistical methods for rates and proportions (2nd ed.). New York: John Wiley. ISBN 978-0-471-26370-8.

8 Bakeman, R. (2022). KappaAcc: A program for assessing the adequacy of kappa. Behavior Research Methods. https://doi.org/10.3758/s13428-022-01836-1