Push

opentraces push uploads committed traces to Hugging Face Hub as sharded JSONL files. Only committed traces are uploaded — run opentraces commit first if needed.

Options

opentraces push --private
opentraces push --public
opentraces push --publish
opentraces push --gated
opentraces push --assess
opentraces push --repo user/custom-dataset
Flag       Default                 Description
--private  off                     Force private visibility
--public   off                     Force public visibility
--publish  off                     Change an existing private dataset to public
--gated    off                     Enable gated access on the dataset
--assess   off                     Run quality assessment after upload and embed scores in the dataset card
--repo     {username}/opentraces   Target HF dataset repo

--approved-only is not part of the current CLI. The supported path is commit -> push.

How Upload Works

Each push creates a new JSONL shard. Existing data is never overwritten or appended to.

data/
  traces_20260329T142300Z_a1b2c3d4.jsonl
  traces_20260401T091500Z_e5f6a7b8.jsonl   <- new shard from this push

That means:

  • Each push is atomic
  • No merge conflicts between contributors
  • Dataset history grows by shard
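The shard filenames above suggest a UTC timestamp plus a short random hex suffix. A minimal sketch of that naming scheme — the exact convention is an assumption inferred from the example filenames, not the tool's documented implementation:

```python
import secrets
from datetime import datetime, timezone

def shard_name(now=None):
    """Build a shard filename like traces_20260329T142300Z_a1b2c3d4.jsonl.

    The timestamp is UTC in compact ISO form; the 8-hex-char suffix is
    what lets every push land in a fresh file, so nothing is overwritten.
    """
    now = now or datetime.now(timezone.utc)
    stamp = now.strftime("%Y%m%dT%H%M%SZ")
    suffix = secrets.token_hex(4)  # 4 random bytes -> 8 hex characters
    return f"traces_{stamp}_{suffix}.jsonl"
```

Because the suffix is random, two contributors pushing at the same second still produce distinct files, which is what makes pushes conflict-free.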

Dataset Card

push generates or updates a README.md dataset card on every successful upload. The card aggregates statistics across all shards in the repo, not just the current batch, so counts are always accurate.

The card records:

  • schema version
  • trace counts, steps, and tokens
  • model and agent distribution
  • date range
  • average cost and success rate (when available)
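Aggregating across all shards rather than the current batch could look like the sketch below. This is illustrative only: the record field names (e.g. "steps") are placeholder assumptions, not the actual opentraces schema.

```python
import json
from pathlib import Path

def aggregate_shards(data_dir):
    """Count traces and steps across every JSONL shard under data_dir.

    Reads one JSON object per line from each traces_*.jsonl shard; the
    "steps" field is a hypothetical stand-in for the real schema.
    """
    total_traces = 0
    total_steps = 0
    for shard in sorted(Path(data_dir).glob("traces_*.jsonl")):
        for line in shard.read_text().splitlines():
            record = json.loads(line)
            total_traces += 1
            total_steps += record.get("steps", 0)
    return {"total_traces": total_traces, "total_steps": total_steps}
```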

A machine-readable JSON block is embedded for programmatic consumers:

<!-- opentraces:stats
{"total_traces":1639,"avg_steps_per_session":42,...}
-->
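A programmatic consumer can pull that block out of the card with a small regex. A minimal sketch, assuming the block always appears as an HTML comment wrapping a single JSON object:

```python
import json
import re

STATS_RE = re.compile(r"<!--\s*opentraces:stats\s*(\{.*?\})\s*-->", re.DOTALL)

def read_stats(readme_text):
    """Extract the embedded opentraces:stats JSON from a dataset card.

    Returns the parsed dict, or None when no stats block is present.
    """
    match = STATS_RE.search(readme_text)
    return json.loads(match.group(1)) if match else None
```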

Quality scorecard (--assess)

opentraces push --assess runs quality scoring after upload and embeds the results in the dataset card. Here's what it looks like on a live dataset:

  Overall Quality  78.1%
  Gate             FAILING
  Conformance      88.4%
  Training         89.0%
  RL               73.4%
  Analytics        55.7%
  Domain           84.1%

The scorecard embeds per-persona scores as shields.io badges, a breakdown table with PASS / WARN / FAIL per rubric, and a quality.json sidecar for machine consumers. See Assess for scoring details.
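A consumer could gate a pipeline on the quality.json sidecar. The field name "overall" below is an assumption for illustration — check the sidecar actually produced by --assess for the real structure:

```python
import json

def gate_passes(quality_json_text, threshold=80.0):
    """Check an overall quality score against a threshold.

    "overall" is a hypothetical field name; adjust to match the real
    quality.json emitted by opentraces push --assess.
    """
    report = json.loads(quality_json_text)
    return report.get("overall", 0.0) >= threshold
```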

Visibility

Setting   Who Can See                  Use Case
Private   Only you                     Sensitive code or private experiments
Public    Anyone                       Open-source contributions
Gated     Anyone who requests access   Controlled sharing

Push Behavior by Mode

In review mode, you commit and push manually. In auto mode, clean traces are committed and pushed automatically after capture.

Export

Export to other formats is not part of the public workflow yet. The CLI exposes a hidden stub for future automation:

opentraces export --format atif  # not yet public

The schema package documents ATIF, ADP, and OTel field mappings in packages/opentraces-schema/FIELD-MAPPINGS.md. If you need to write a converter now, start from the TraceRecord / Step model definitions there.
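A hand-rolled converter would walk each shard line by line and remap fields. Every field name in this sketch ("trace_id", "steps", "role", "content") is a placeholder — take the real mappings from FIELD-MAPPINGS.md and the TraceRecord / Step models before relying on it:

```python
import json

def convert_record(record):
    """Remap one trace record into a hypothetical target shape.

    All source and target field names here are illustrative placeholders,
    not the actual opentraces or ATIF schema.
    """
    return {
        "id": record.get("trace_id"),
        "turns": [
            {"role": step.get("role"), "content": step.get("content")}
            for step in record.get("steps", [])
        ],
    }

def convert_shard(jsonl_text):
    """Convert every non-empty line of a JSONL shard."""
    return [
        convert_record(json.loads(line))
        for line in jsonl_text.splitlines()
        if line.strip()
    ]
```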