The three coordination taxes that quietly destroy class decks

Eight students, eight .apkg files, one merged Anki deck that does not break.

The classic class-deck recipe is simple to write down and brutal to execute. Coordinator divides the syllabus, each member cards their lectures, everyone exports a .apkg, coordinator merges into a master collection, broadcasts. The recipe sounds clean. It almost never ships clean.

Three coordination taxes are why. Note-type collisions when eight members use eight different templates. Question quality that varies by an order of magnitude across members. Image-occlusion that gets silently dropped because manual masking is too slow. This page is about a workflow that pays each tax down to zero, and a recipe a three-person crew can run on a Sunday afternoon for a 12-lecture block.

Matthew Diakonov
9 min read

Direct answer · verified 2026-05-06

Split the lectures, generate in parallel, merge into one collection.

The core workflow: split the syllabus across members in a shared sheet, each member generates their assigned lectures' .apkg files in parallel (about 60 seconds per 90-slide deck), one coordinator imports all of them into a shared Anki collection and re-exports. The technical fix that makes the merge clean is namespaced note types. Studyly's .apkg ships with three note-type names (studyly_mcq, studyly_cloze, studyly_image_occlusion). Eight members generating exports produce exactly those three note types in the merged collection, not eight forks of "Basic" with different field orders.

Anki's official guidance on collaboration acknowledges that native support is limited and recommends a turn-based per-lesson export workflow (FAQ entry on collaborative decks). The shape below extends that guidance with the parallel-generation and namespaced-note-type changes that make it survive a real class.

Tax one: note-type collisions on merge

Anki keys cards on note types. A note type is the template that defines the fields (Front, Back, Extra, Source, etc.) and the card layouts that render those fields. When two members import .apkg files that both define a note type called "Basic" but with different field orders, the second import is supposed to surface a dialog asking what to do. The dialog is hostile, and most coordinators click through it. The result is cards from later imports rendering fields in the wrong slots: the answer in the source-citation field, the explanation in the alt-text. You do not notice for a week, because the cards still display; they just display wrong.

Multiplied across eight members, the master deck has cards from three or four collision events, each with its own subset of broken renders. The fastest path to fixing it is to drop the master and rebuild from scratch, which throws away every member's edits. A lot of class decks die at this point and never come back.

The fix is to make every member's export use the same set of note types. Studyly does this by shipping every .apkg with three namespaced note-type names: studyly_mcq, studyly_cloze, and studyly_image_occlusion. Eight members generating exports use the same three note types. The merged master has three note types. No collisions. No silently-shadowed fields.
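Before the coordinator clicks Import eight times, the collision check can be done in a few lines. The sketch below is one way to do it, under one assumption: the exports use the legacy .apkg layout, a zip carrying a collection.anki2 SQLite file whose col.models column stores the note-type definitions as JSON (newer Anki releases can still write this format for older clients; a compressed collection.anki21b needs Anki itself to open). It lists every note-type name and field order across a folder of member exports and flags same-named types whose fields disagree:

```python
# Pre-merge sanity check: list note types and field orders across a folder
# of member .apkg exports, and flag name collisions with differing fields.
# Assumes the legacy .apkg layout (collection.anki2 inside the zip).
import json
import sqlite3
import sys
import tempfile
import zipfile
from pathlib import Path

def note_types(apkg: Path) -> dict[str, list[str]]:
    """Map note-type name -> ordered field names for one .apkg."""
    with zipfile.ZipFile(apkg) as zf, tempfile.TemporaryDirectory() as tmp:
        db = sqlite3.connect(zf.extract("collection.anki2", tmp))
        (models_json,) = db.execute("SELECT models FROM col").fetchone()
        db.close()
    models = json.loads(models_json)
    return {m["name"]: [f["name"] for f in m["flds"]] for m in models.values()}

seen: dict[str, tuple[str, list[str]]] = {}  # name -> (first file, fields)
for apkg in sorted(Path(sys.argv[1]).glob("*.apkg")):
    for name, fields in note_types(apkg).items():
        if name in seen and seen[name][1] != fields:
            print(f"COLLISION: {name!r} in {apkg.name} has fields {fields}, "
                  f"but {seen[name][0]} has {seen[name][1]}")
        seen.setdefault(name, (apkg.name, fields))
print(f"{len(seen)} distinct note-type names across all exports: {sorted(seen)}")
```

A clean class run prints three names; any COLLISION line is a merge that would have silently shadowed fields.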

Tax two: question quality varies by member

On any class deck, two members write great cards, three write decent cards, and three write filler that asks what year a discovery was made or what the book's third chapter is named. The bottom-tier contributions still ship into the master. The class still studies them. Time spent reviewing filler is time not spent on testable content.

The honest version of this problem is not that bad members exist; it is that hand-carding under exam pressure is hard. Even a strong student writing at 11pm produces a different distribution than the same student writing at 11am with coffee. A class deck is a stack of those distributions, half of them written at 11pm.

Auto-generation flattens the distribution. Studyly scores 81.3 on a held-out three-document rubric (factual correctness, clarity, distractor quality, and question-type coverage), against Unattle at 78.0, Gauntlet at 68.0, and Turbolearn at 57.8. The variance does not depend on which member ran the generation or whether they had coffee. The 1 in 5 cards that benefit from a human edit get edited by the assigned subject-matter member, which puts the human effort exactly where the workflow needs it.

Tax three: image-occlusion gets silently dropped

Image-occlusion is the single biggest reason group workflows produce decks that under-test the practical exam. A labeled anatomy diagram, a biochem pathway with structure names, a histology slide with tagged cells: these are the question types the practical actually asks. A text-only card asking "what attaches to the medial epicondyle" is a different test than a labeled humerus diagram with the medial epicondyle masked.

Manual image-occlusion in Anki takes 10 to 15 minutes per labeled diagram. A 90-slide anatomy lecture with 14 labeled figures is two to three hours of masking on top of the carding. Most members do not have that time. So the merged deck has image-occlusion on whichever lectures landed with the members who did, and text-only cards on the rest. The class's practical performance tracks which members got assigned anatomy.

Auto image-occlusion makes coverage uniform. Studyly extracts the figure off each slide, identifies the labeled structures (anatomical terms, drug names, enzyme names, cell types), and writes one image-occlusion note per label with the mask placed over the structure name. The .apkg carries the figure and the mask coordinates. After merge, the master has image-occlusion on every labeled figure across every member's contribution. Coverage is whatever the source decks contain, not whichever members had spare hours.
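A member (or the coordinator) can spot-check that coverage before the merge by counting notes per note type in each export; a lecture with labeled figures should show a nonzero studyly_image_occlusion count. A minimal sketch, under the same legacy-format assumption as above (collection.anki2 inside the zip, note-type JSON in col.models):

```python
# Count notes per note type in one .apkg export. A lecture with labeled
# figures should show studyly_image_occlusion notes; zero is a red flag.
import json, sqlite3, sys, tempfile, zipfile
from collections import Counter

with zipfile.ZipFile(sys.argv[1]) as zf, tempfile.TemporaryDirectory() as tmp:
    db = sqlite3.connect(zf.extract("collection.anki2", tmp))
    (models_json,) = db.execute("SELECT models FROM col").fetchone()
    # models JSON is keyed by model id as a string; notes.mid is the integer.
    name_by_mid = {int(mid): m["name"] for mid, m in json.loads(models_json).items()}
    counts = Counter(name_by_mid[mid] for (mid,) in db.execute("SELECT mid FROM notes"))
    db.close()

for note_type, n in counts.most_common():
    print(f"{n:5d}  {note_type}")
if "studyly_image_occlusion" not in counts:
    print("warning: no image-occlusion notes in this export")
```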

The two workflows, side by side

The coordination layer (assignment sheet, coordinator role, merge step) is the same. What changes is what each member does between the assignment and the export.

On the hand-carded side, each member writes cards for their assigned lectures inside their own Anki collection. An hour or two per lecture, plus image-occlusion if they have time (most do not). Members use whatever note type they prefer. The coordinator imports eight .apkg files into a master and clicks through note-type collision dialogs. The master ships with inconsistent quality, partial image-occlusion coverage, and silent field-shadowing on a subset of cards.

  • 60 to 120 minutes of carding per member per lecture
  • Note-type forks across members: 3 to 8 in the merged master
  • Image-occlusion: spotty, depends on member time budget
  • Quality variance: high, varies by who was awake on Sunday
  • Time to v1: multi-week project

The traditional split-the-lectures recipe (and where it fails)

1. Assign lectures across members in a shared sheet

Coordinator divides 30 lectures across 8 to 12 members. Each member knows which lectures are theirs. Lock the assignment so two people do not card the same lecture.

2. Each member writes cards for their lectures, individually

An hour or two of carding per 90-slide lecture, in each member's own Anki collection. Members use whatever note type they prefer, and most skip image-occlusion because manual masking is slow.

3. Members export their work as .apkg files

File menu, Export, with the deck selected and 'Include media' checked. Each member sends their .apkg to the coordinator over Discord, Drive, or a class GitHub.
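One thing worth checking on receipt: in the legacy .apkg layout, the bundled media travels as numbered files plus a JSON manifest named media inside the zip. A quick sketch the coordinator can run on each incoming file to confirm "Include media" was actually checked:

```python
# Spot-check that an incoming .apkg actually bundles its media. In the
# legacy format, the zip carries a JSON file named `media` mapping the
# numbered members to original filenames; an empty map means broken images.
import json, sys, zipfile

with zipfile.ZipFile(sys.argv[1]) as zf:
    media_map = json.loads(zf.read("media"))
print(f"{len(media_map)} media files in {sys.argv[1]}")
if not media_map:
    print("warning: no media bundled; was 'Include media' checked on export?")
```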

4. Coordinator imports every .apkg into a master collection

Eight imports, eight chances of a note-type collision warning. The coordinator clicks through them all because the alternative is hours of manual reconciliation per collision.

5. Coordinator re-exports a master .apkg, broadcasts to the class

Everyone re-imports. The master deck now has cards where some members' work renders correctly and some renders into wrong fields. By the time anyone notices, half the class is two weeks deep into a corrupted review queue.

The same recipe with batch generation in the middle

1. Assign lectures in the same shared sheet

Coordination layer is unchanged. The sheet is still the source of truth on who owns which lecture. The change is only what each member does once they have an assignment.

2. Each member generates their lectures in parallel, in browser tabs

Drop the lecture PDF, PowerPoint, or scanned handout into Studyly. About 60 seconds per 90-slide deck produces ~200 multiple-choice cards plus one image-occlusion card per labeled diagram. Three members each running four tabs finish 12 lectures in about four minutes of wall-clock time.

3. Each member runs a 5-minute review pass on their own lectures

Skim, edit a stem that needs correction, suspend obvious filler. The generator scores 81.3 on a rubric covering factual correctness, clarity, distractor quality, and question-type coverage, so roughly 4 in 5 cards ship as-is. The 1 in 5 that need a tweak are faster to fix because the source slide is one click away on every card.

4. Each member exports their .apkg, hands it to the coordinator

The .apkg uses Studyly-namespaced note types: studyly_mcq, studyly_cloze, studyly_image_occlusion. Every member's export uses the same three. There are no eight-way note-type forks waiting to collide on merge.

5. Coordinator imports every .apkg into a master, re-exports, broadcasts

Same shape as the traditional flow, but the imports do not collide because every .apkg shares the same three note types. The master deck is internally consistent. Image-occlusion is present on every contribution, not just the members who had time to do it.
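The coordinator's import loop can also be scripted. A sketch, assuming the anki pip package at a 2.1.x release that still ships the legacy AnkiPackageImporter (newer releases replace it with a different import API, in which case the GUI import does the same job):

```python
# The coordinator's eight imports as a loop instead of eight dialogs.
# Run with Anki closed, against a copy of the master collection.
import sys
from pathlib import Path

from anki.collection import Collection
from anki.importing.apkg import AnkiPackageImporter

master = Collection(sys.argv[1])               # path to master collection.anki2
for apkg in sorted(Path(sys.argv[2]).glob("*.apkg")):
    AnkiPackageImporter(master, str(apkg)).run()   # merges notes, cards, media
    print(f"imported {apkg.name}")

# Because every export shares the same three studyly note types, the merged
# collection should end up with exactly those three plus Anki's defaults.
print("note types after merge:", sorted(m["name"] for m in master.models.all()))
master.close()
```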

What ships in each member's .apkg

Every export carries the same six things. The reason this matters for a group is that "the same six things" means the coordinator is not reconciling eight different shapes of contribution at merge time.

The contents of one member's lecture .apkg

  • Multiple-choice cards with realistic distractors drawn from the lecture's own context (about 200 cards from a 90-slide deck).
  • Cloze-deletion cards for high-yield definitions and key terms, suitable for groups whose existing routine is cloze-only.
  • Image-occlusion cards from any labeled diagram on a slide. Mask sits on the labeled structure, the rest of the figure is visible. Carries through .apkg into Anki as a standard image-occlusion note.
  • Case-style stems that build short clinical scenarios from the lecture content. Useful for the NBME-style class exams classes increasingly write.
  • Source-slide reference on every card pointing to the slide number in the original PDF, kept inside an Anki field for the explain step.
  • Studyly-namespaced note types (studyly_mcq, studyly_cloze, studyly_image_occlusion). Eight members merging eight .apkg files produce exactly three note types in the master collection.
cardio_block_merge.session

Real shape of a 12-lecture cardio block run by three members in parallel. Three minutes of generation per member, five minutes of review per lecture, ten minutes of merge. Image-occlusion present across 100% of labeled figures because every member's export carries it. The master broadcasts to the class roughly 70 minutes after the first member started.

Where each workflow wins

The hand-carded path is not extinct. There are cases where it is still the right call: a deck with a strong subject-matter editor who wants tight control over every stem, a class small enough that the coordination tax is negligible, a course where the lectures are pure prose and the cards are mostly cloze. For those, the comparison below is not the recommendation; it is for the more common case of a 30-person preclinical class trying to ship a deck for next Friday's exam.

Same coordination layer. Different middle step. The two are not opposed; many classes use Studyly for class material and AnKing or Zanki for boards content alongside it.

Time per lecture, per member
  • Hand-carded split: 60 to 120 minutes of hand-carding for a 90-slide deck, plus 10 to 15 minutes of image-occlusion per labeled diagram (most members skip this).
  • Studyly batch generation: about 60 seconds of generation, plus a 5-minute review pass. Image-occlusion is automatic.

Note types in the merged collection
  • Hand-carded split: one note type per member who used a custom template, often three to eight different types after a class merge. Field-order mismatches silently shadow each other on import.
  • Studyly batch generation: exactly three, studyly_mcq, studyly_cloze, and studyly_image_occlusion. Every member's export uses the same set.

Question quality consistency across members
  • Hand-carded split: highly variable. The members who write good cards write great cards; the members who do not write filler. Quality is whoever was most awake on Sunday afternoon.
  • Studyly batch generation: consistent baseline of 81.3 on a held-out three-document rubric (factual correctness, clarity, distractor quality, question-type coverage), measurable per export.

Image-occlusion coverage on the merged deck
  • Hand-carded split: spotty. Most members skip it. Anatomy, histology, and biochem pathways end up under-carded on the very subjects where the practical is image-based.
  • Studyly batch generation: automatic for every labeled figure on every slide, across every member's export. Coverage is whatever the source decks contain.

Time from semester start to deck v1
  • Hand-carded split: a multi-week project. Most class decks ship the first version mid-block, after 30+ hours of distributed effort.
  • Studyly batch generation: Sunday afternoon. A 12-lecture block in roughly 90 minutes for two members in parallel, including review and merge.

Maintenance when professors add a lecture mid-semester
  • Hand-carded split: either someone hand-cards the new lecture in 60 to 120 minutes, or it never gets carded.
  • Studyly batch generation: the assigned member runs the new PDF, exports, and hands the .apkg to the coordinator. The master deck updates in under 10 minutes.

Honest limits

Cards are bounded by the source. If your professor's slides are thin, the cards from those slides will be thin. The generator stays close to the deck so the questions match what the professor will ask, which is the right tradeoff for class exams and the wrong tradeoff if you want broader textbook context. A class deck for textbook material wants a different upload (the chapter PDF, not the slides).

Image-occlusion quality depends on the figure. A clean labeled diagram from a publisher textbook produces a clean masked card. A photocopy of a photocopy produces a card a member might want to redo by hand. The trick is to assign the photocopy lectures to a member who is willing to redo a few cards.

Auto-rephrasing on revisit is a Studyly feature, not an Anki feature. When the class studies the merged .apkg inside Anki, they study the canonical card set with Anki's scheduler. The wording rotation only happens when a member studies inside Studyly. Pick the path that matches the class's existing rhythm; both work.

Studyly is cloud-only. If your school requires fully-offline tools for exam prep, this is not the right pick.

Test it on one block first

Drop one cardio block. Three members, parallel tabs, master deck by dinner.

Free tier on app.jungleai.com, no credit card. Each member gets full export of their assigned lectures including image-occlusion. The merge into the master collection is clean because the .apkg files share three namespaced note types.

Common questions about running a class Anki deck as a group

What is the actual workflow when a study group splits an Anki deck?

The classic recipe is: pick a coordinator, divide the syllabus across members (one or two lectures each), each member makes cards for their assigned lectures inside their own Anki collection, then exports a .apkg per lecture and shares the files. The coordinator imports each .apkg into a master collection and re-exports a combined .apkg, which everyone re-imports. The workflow works on paper. It breaks in practice because eight people writing cards on different note templates produce a master deck where half the cards are missing fields, image-occlusion is inconsistent, and quality varies by an order of magnitude across members.

Why do note-type collisions break a merged class deck?

Anki keys cards on note types (the template that defines fields like Front, Back, Extra). When two members import .apkg files that both define a note type called 'Basic' but with different field orders or different fields, the second import is supposed to be flagged, but a lot of users click through and end up with cards whose fields render in the wrong slots. Multiply that across eight members and the master deck has cards where the answer renders in the source-citation slot, or images render as alt text. Studyly's .apkg ships with a Studyly-namespaced note-type set (studyly_mcq, studyly_cloze, studyly_image_occlusion). Eight members all generating exports use the same three note types, so the merged collection has exactly three note types, not twenty-four.

Can a group just use AnkiHub for this?

AnkiHub is real and it is good at what it does, which is community-maintained Step decks with a suggestion-and-approve workflow. The pricing is $6/month Core, $10/month Premium, $450 lifetime, per member. For a 30-person class deck that exists for one semester and then does not, the per-seat cost adds up faster than people expect. The bigger issue is upstream of cost: AnkiHub still expects each contributor to write the cards, so it solves the distribution problem but not the writing problem. Studyly addresses the writing problem (each member gets a 60-second-per-lecture export) and the distribution problem (Anki .apkg files merge cleanly because of the namespaced note types). The two are complements; some classes use both.

How long does parallel generation actually take for a 12-lecture block?

If three members each take four lectures and run them through Studyly in their own browser tabs, total wall-clock time is about four minutes (since each 90-slide deck takes 60 seconds and members run in parallel). The longer step is the human review pass per member, which is where the value of having one specific person assigned to one specific lecture pays off: the assigned person spots subject-matter errors that the auto-generation cannot. Realistically, for a 12-lecture block, two members with parallel review get a publishable shared .apkg out the door in 90 minutes including the merge.

Does the export carry image-occlusion across the merge?

Yes, and this is the failure mode that wrecks the most class decks. When members hand-write cards, image-occlusion takes 10 to 15 minutes per labeled diagram, so most members skip it. The merged deck ends up with text-only cards on anatomy, biochem pathways, and histology, which are exactly the subjects where the practical exam is image-based. Studyly extracts the figure off each slide, identifies labeled structures, and writes one image-occlusion note per label with the mask placed over the structure name. The .apkg carries the figure and the mask coordinates. After merge, the master deck has consistent image-occlusion across every member's contribution, not whichever members felt like doing it.

What happens if two members make cards for the same lecture by accident?

Anki identifies each note by a GUID assigned when the note is created, so two near-identical MCQ cards from different generation runs carry different GUIDs and import as two separate notes. You get duplicates. Studyly's review pass can flag duplicates before export, but if the coordinator imports two copies of the same lecture's .apkg into the master collection, the fix is to suspend or delete the older import. The coordination fix is simpler: lock the lecture-to-member mapping in a shared sheet before generation starts.
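If a double-import does slip through, one way to enumerate the damage is to group notes by the first-field checksum Anki stores on every note. A minimal sketch, assuming direct access to the master profile's collection.anki2 (back it up first, and run with Anki closed):

```python
# List likely duplicate notes in a collection: notes sharing a csum (Anki's
# stored checksum of the first field) almost always share a first field.
import sqlite3, sys

db = sqlite3.connect(sys.argv[1])        # path to collection.anki2
rows = db.execute("""
    SELECT csum, COUNT(*) AS n, MIN(flds) AS example
    FROM notes GROUP BY csum HAVING n > 1 ORDER BY n DESC
""").fetchall()
for csum, n, example in rows:
    first_field = example.split("\x1f")[0]   # fields are 0x1f-separated
    print(f"{n} copies: {first_field[:80]}")
db.close()
```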

Can each member edit cards before merging, or does it have to be the auto-generated set?

Edit before merging. Each member runs their lectures through Studyly, opens the deck in the browser, edits stems that need correction (the eval scored 81.3 on a held-out three-document rubric, meaning roughly 4 in 5 cards ship as-is and the rest benefit from a 30-second tweak), then exports the .apkg. Editing inside Studyly is faster than editing inside Anki because the source slide is one click away from every card. The exported .apkg carries the edits.

How do you keep the merged deck in sync as new lectures are added during the semester?

Add lectures incrementally. Each week, the assigned member generates the new lecture, exports its .apkg, and shares it. The coordinator imports just that .apkg into the master and re-shares the new master. Anki's filtered-deck feature plus a search like added:7 lets the coordinator extract just this week's contributions for sanity-checking before broadcasting. You do not need to regenerate the whole deck to add one lecture.

What does the lecture-to-member assignment look like in practice?

A simple sheet works: Lecture, Assigned member, Generated (Y/N), Reviewed (Y/N), Exported (Y/N), Merged (Y/N). The coordinator is responsible for the merge column. Each row tracks one .apkg from generation through to landing in the master deck. The point of the sheet is not project management, it is so nobody generates the same lecture twice and nobody assumes someone else has already done lecture 7.
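If the sheet lives in a spreadsheet that can export CSV, the lock is easy to enforce mechanically. A minimal sketch, assuming a hypothetical assignments.csv with the column names above:

```python
# Lock check for the assignment sheet: flag any lecture assigned twice and
# list what has not yet landed in the master. Assumes assignments.csv with
# columns: Lecture, Assigned member, Generated, Reviewed, Exported, Merged.
import csv
from collections import Counter

with open("assignments.csv", newline="") as f:
    rows = list(csv.DictReader(f))

lectures = Counter(row["Lecture"].strip() for row in rows)
for lecture, n in lectures.items():
    if n > 1:
        print(f"duplicate assignment: {lecture} appears {n} times")

unmerged = [r["Lecture"] for r in rows if (r.get("Merged") or "").upper() != "Y"]
print(f"{len(unmerged)} lectures not yet merged: {unmerged}")
```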

Is there a free path so we can test this before committing the whole class?

Yes. The Studyly free tier on app.jungleai.com covers full lecture generation including .apkg export, no credit card. The honest test for a class is: pick one block (six to ten lectures), assign them across two or three members, run the workflow end-to-end, and see how the master deck feels in week one of review. If the cards play in your existing Anki rhythm, scale up. If they do not, you have lost a Sunday afternoon, not a semester.