Reference · the minute math, per subtask

The real time cost of making Anki cards from one lecture deck.

The honest per-subtask breakdown for a 90-slide lecture with about 200 testable facts and 14 labeled figures. Read the slide. Distill the fact. Write the stem. Write three plausible distractors. Place masks on the labeled diagrams. Multiply by 200. The cards-per-hour ceiling and where the hours actually disappear, with a real terminal capture of the automated equivalent.

Matthew Diakonov
9 min read

Direct answer · verified 2026-05-15

6 to 8 minutes per MCQ. 10 to 15 minutes per image-occlusion card.

For a typical 90-slide lecture deck with about 200 testable facts and 14 labeled figures, hand-carding the full set lands between 24 and 30 hours of focused work. The MCQ cost is dominated by the distractor-writing step (90 seconds to 3 minutes of the 6-to-8-minute total). The image-occlusion cost is dominated by mask placement. Skip distractors (cloze only) and the per-card minute number drops to 3 to 4, putting the full deck at 10 to 13 hours instead of 24 to 30.

The automated equivalent on the same kind of deck, captured in a real terminal run further down this page, is 58 seconds for the same 200 MCQ + 14 image-occlusion output. The point of this page is not to argue you should always automate; it's to make the minute math legible so you can decide on a specific deck whether the trade is worth it. Anki's own getting-started docs are at docs.ankiweb.net/getting-started.html.

The numbers most students don't run before deciding

  • ~200 testable facts in a 90-slide lecture
  • ~7 min median MCQ build time
  • ~12 min median image-occlusion build time
  • ~27 hr full hand-built deck (MCQ + IO)

27 hours is the conservative honest number for a 90-slide lecture deck carded the way you would actually want to study it (MCQ with plausible distractors, plus image-occlusion on the labeled diagrams). The way most students hit the 12-hour-and-give-up wall is that they start with rigorous MCQ, fall back to cloze-only by hour 4, skip image-occlusion entirely by hour 6, and ship a deck that's half of what they intended. Then the deck doesn't get reviewed often enough because it's too long and the cards feel uneven.

Per-subtask: where the minutes actually go

Most people writing about Anki for med school give you a per-card number and stop; they rarely tell you which subtasks that number includes. The breakdown below is what one MCQ from a single dense lecture slide actually costs, broken into the steps you can't avoid.

Subtask | Range | Why this much
Read the slide, identify the fact | 60-120 s | Some slides carry 3 facts, some carry 0. You can't predict per slide; you have to read each one carefully.
Write the stem (the question) | 30-45 s | Fluent typing. If you're rewording the slide bullet instead of copying it, closer to 45 s.
Write 3 plausible distractors | 90 s-3 min | The hard part. A distractor that's obviously wrong makes the card a recognition test. This step is what most students cut first when tired.
Sanity-check against the slide | 20-40 s | Flipping back to confirm. Skipping this step is how wrong-answer cards land in your deck.
Tag, deck, format in Anki | 15-30 s | Cumulative attention tax. Per card it's small; over 200 cards it's a real hour.
Build one image-occlusion card | 10-15 min | Export figure, open IO Enhanced add-on, place mask precisely on labeled region, label structure. Slower than typing a card because mouse precision is slower than keyboard speed.

Per-card totals: about 7 minutes for a sustainable MCQ, 12 minutes for a sustainable image-occlusion card. The full-deck math drops out from there.
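If you want to rerun that roll-up with your own estimates, here is a minimal sketch of the arithmetic in Python. The per-card ranges and card counts are this page's numbers; the helper and variable names are illustrative, not part of any tool mentioned here.

```python
# Full-deck minute math from per-card estimates (this page's numbers).
# Per-card values are (low, high) minutes; counts match the example
# 90-slide deck: ~200 testable facts and 14 labeled figures.

def deck_hours(per_card_min, count):
    """Return (low, high) hours to build `count` cards at the given per-card range."""
    lo, hi = per_card_min
    return (count * lo / 60, count * hi / 60)

mcq_hours   = deck_hours((6, 8), 200)    # MCQ with 3 plausible distractors
io_hours    = deck_hours((10, 15), 14)   # image-occlusion on labeled figures
cloze_hours = deck_hours((3, 4), 200)    # cloze-only: no distractor step

full_lo = mcq_hours[0] + io_hours[0]
full_hi = mcq_hours[1] + io_hours[1]

print(f"MCQ cards:  {mcq_hours[0]:.0f}-{mcq_hours[1]:.0f} h")      # 20-27 h
print(f"IO cards:   {io_hours[0]:.1f}-{io_hours[1]:.1f} h")        # 2.3-3.5 h
print(f"Full deck:  {full_lo:.0f}-{full_hi:.0f} h")                # 22-30 h
print(f"Cloze-only: {cloze_hours[0]:.0f}-{cloze_hours[1]:.0f} h")  # 10-13 h
```

Swap the (low, high) tuples for your own pace and the deck-level hours fall out directly.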

The cards-per-hour ceiling is real, and it's lower than you think

Sustained MCQ output from a lecture deck caps around 10 to 12 cards per hour. That's the number a fluent med student maintains across a three-hour session with breaks. Anything above that rate means you're cutting distractors (recognition test), cutting fact-extraction care (wrong answers in your deck), or cutting sanity-checks (worse wrong answers in your deck).

Cloze-only carding sustains around 18 to 20 cards per hour because there are no distractors. The price is that cloze tests recall of one word in context, not discrimination between near-synonyms; on a pharmacology deck where six drugs all do roughly the same thing, MCQ forces you to discriminate and cloze does not.

Image-occlusion caps around 4 to 6 cards per hour. The slowdown is not the IO Enhanced add-on; it's the mouse work of placing each mask precisely on a labeled region. A 14-figure anatomy lecture is about 2.5 to 3 hours of mask placement at sustainable pace, which is why most students skip image-occlusion entirely, even though the practical specifically tests tagged diagrams.
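A minimal sketch of those ceilings turned into session hours for the same example deck; the rates are this page's estimates, and the totals are pure carding time at the ceiling, before breaks.

```python
# Hours implied by the sustainable cards-per-hour ceilings above,
# for the example deck (200 text cards, 14 labeled figures).

CEILINGS = {                      # sustainable cards per hour (low, high)
    "MCQ (3 distractors)": (10, 12),
    "cloze-only":          (18, 20),
    "image-occlusion":     (4, 6),
}

def hours_needed(cards, per_hour):
    lo, hi = per_hour
    return (cards / hi, cards / lo)   # best case uses the faster rate

for mode, rate in CEILINGS.items():
    n = 14 if mode == "image-occlusion" else 200
    lo_h, hi_h = hours_needed(n, rate)
    print(f"{mode}: {n} cards -> {lo_h:.1f}-{hi_h:.1f} h")
# MCQ: 16.7-20.0 h, cloze-only: 10.0-11.1 h, image-occlusion: 2.3-3.5 h
```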

One real lecture deck, two paths

Anatomy I, Lecture 4. Brachial plexus. 90 slides, 14 labeled figures. You open the PDF on the left half of your screen and Anki on the right. You read slide 1, decide it's a title slide, skip. Slide 2 has two testable facts, you write two MCQs with three distractors each (14 minutes). By slide 30 you're at hour 3 and you're starting to write distractors that look almost identical to the right answer because the lecture overlaps itself. You take a break. You come back. You skip image-occlusion on the first six diagrams because each one would be another 12 minutes and the exam is in three days.

  • 27 hours of focused work for the full deck
  • Quality drops noticeably past hour 4
  • Image-occlusion the first thing cut when tired
  • Tag/deck/format tax adds up to ~1 hour over 200 cards
anatomy_lecture_4.session

Real terminal output from a 90-slide brachial-plexus deck. The first half is the automated run timestamped per stage. The bottom block is the manual equivalent at the median rates from the per-subtask table above.

Per-subtask, side by side

The per-subtask steps from the breakdown above, plus the full-deck roll-up, shown next to what each step costs in the automated path. The ratios aren't the point. The absolute hours are.

Read the slide, identify the testable fact (per slide)
  • Manual: 60 to 120 seconds. The bottleneck on most decks because some slides carry 2-3 testable facts and some carry zero (title, references).
  • Automated (Studyly): roughly 0.1 seconds. Layout pass + fact extractor identifies which slides carry testable facts; title and references slides are skipped automatically.

Write one multiple-choice stem (per card)
  • Manual: 30 to 45 seconds of typing for a fluent writer. Longer if you're rewording the slide's bullet point into a question instead of just lifting it.
  • Automated (Studyly): roughly 0.25 seconds. Stem comes out of the same generator pass that produces the card.

Write 3 plausible distractors (per MCQ)
  • Manual: 90 seconds to 3 minutes. This is the part most students cut when tired, which is when the cards become recognition tests instead of recall tests.
  • Automated (Studyly): inline with stem generation. Distractors are drawn from neighbors of the correct answer in the same lecture context, not the open web. Held-out eval distractor-quality score: 81.3 vs Turbolearn 57.8.

Sanity-check the card against the source slide (per card)
  • Manual: 20 to 40 seconds of flipping back to confirm the fact wasn't misremembered. Skipping this step is how cards with wrong answers end up in your deck.
  • Automated (Studyly): built into the rubric gate. Cards that fail factual correctness against the source slide are regenerated, not shipped.

Build one image-occlusion card from a labeled diagram
  • Manual: 10 to 15 minutes. Export the figure, open the IO Enhanced add-on, place a mask precisely on each labeled region, label the structure. 14 diagrams: 2.5 to 3 hours.
  • Automated (Studyly): roughly 0.5 seconds. Figure is extracted, labeled structures identified, mask placed over each label, .apkg ships the IO Enhanced cards intact.

Full 90-slide deck (~200 cards + 14 image-occlusion)
  • Manual: roughly 24 to 30 hours by hand at sustainable pace. Past hour 4, card quality drops; past hour 8 in one day, error rate spikes.
  • Automated (Studyly): 58 seconds end-to-end on a real anatomy lecture (terminal output reproduced below). 218 cards, including 14 image-occlusion with masks intact.
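
If you want to reproduce the ratios quoted in the FAQ below and, more usefully, the absolute hours, here is a small sketch that derives both from the midpoints this page uses. The manual times are this page's estimates; the automated per-card times are back-calculated from the 58-second run.

```python
# Ratio vs. absolute-hours view of the manual/automated gap,
# using the per-card midpoints from this page.

manual_sec = {"MCQ": 7 * 60, "image-occlusion": 12 * 60}   # per card, by hand
auto_sec   = {"MCQ": 0.25,   "image-occlusion": 0.5}       # per card, generated
counts     = {"MCQ": 200,    "image-occlusion": 14}        # example deck

for kind in manual_sec:
    ratio = manual_sec[kind] / auto_sec[kind]
    hours_saved = counts[kind] * (manual_sec[kind] - auto_sec[kind]) / 3600
    print(f"{kind}: ~{ratio:,.0f}x per card, ~{hours_saved:.1f} h saved on this deck")
# MCQ: ~1,680x per card, ~23.3 h saved on this deck
# image-occlusion: ~1,440x per card, ~2.8 h saved on this deck
```

The ~1,700x and ~1,440x figures quoted later on this page are these numbers rounded; the 23-hour and 2.8-hour lines are the ones that matter.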

The hand-carding-as-encoding argument, and when it holds

The strongest argument for hand-carding is that writing the card is itself an encoding pass. You read the material slowly, you decide what's testable, you produce a card that you've already half-learned by the time it lands in the deck. That benefit is real. It is not free, and it is only real if the cards then enter a spaced-repetition rotation.

If you hand-card a 90-slide deck (25 hours) and then run two passes through it before the exam, the encoding gain is real and the deck is doing work for you. If you hand-card the same deck and then run zero passes (which happens more often than students admit), the encoding effect is gone within a week, and the 25 hours produced no durable retention. The cards exist on disk, but they didn't get reviewed often enough to overcome the forgetting curve.

The honest rule: when the alternative to generating is "I will not card this lecture at all because I don't have 25 hours this week," generated cards win every time. When the alternative is rigorous hand-carding followed by daily review, it's a closer call, and the encoding argument has weight. The middle case (hand-card half the deck, ship a worse version, review intermittently) is the worst of both, and it's also the most common.

Where automation is wrong on time-cost

Two cases where the automated time math does not line up the way this page makes it sound.

First, board content. For Step 1, Step 2 CK, Step 3, NCLEX, INBDE, NAVLE, the community decks (AnKing, Zanki, Pepper, BlueBoxes) exist because thousands of students iterated on the same Step content for years. No generator beats that yet. The 25 hours you'd "save" by generating Step cards is illusory; you'd then spend it editing cards that AnKing already has right. For board prep, keep the premade deck.

Second, generated cards have an edit rate above zero. On the held-out eval Studyly scored 81.3 vs Turbolearn 57.8. That difference is real (the 57.8 deck has roughly 1 bad card in 3, the 81.3 deck roughly 1 in 15), and the bad cards cost 30 to 60 seconds each to fix. For 200 generated cards at a 1-in-15 edit rate, that's roughly another 7 to 13 minutes of work. Real time. Add it to the 58 seconds.
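A minimal sketch of that cleanup arithmetic, using the edit rates above; the 30-to-60-second fix time is this page's estimate, and the helper is illustrative.

```python
# Post-generation cleanup time: bad-card rate times per-card fix time.

def cleanup_minutes(n_cards, bad_rate, fix_sec=(30, 60)):
    """Minutes spent fixing bad cards, as a (low, high) range."""
    bad = n_cards * bad_rate
    return (bad * fix_sec[0] / 60, bad * fix_sec[1] / 60)

for label, rate in [("81.3-scoring deck (1 bad in 15)", 1 / 15),
                    ("57.8-scoring deck (1 bad in 3)",  1 / 3)]:
    lo, hi = cleanup_minutes(200, rate)
    print(f"{label}: {lo:.0f}-{hi:.0f} min of edits on 200 cards")
# 81.3-scoring deck (1 bad in 15): 7-13 min of edits on 200 cards
# 57.8-scoring deck (1 bad in 3): 33-67 min of edits on 200 cards
```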

Want the 58-second number against your own lecture deck?

Free tier, no credit card. Drop one of next week's lecture decks in and watch the per-stage timestamps. If it doesn't beat your hand-carding pace by an order of magnitude, the conversation ends.

Frequently asked

Roughly how long does it take to make Anki cards from one lecture by hand?

For a 90-slide lecture with about 200 testable facts and 14 labeled figures, a conservative honest number is 24 to 30 hours. That breaks down as 6 to 8 minutes per multiple-choice card (read the slide, distill the fact, write the stem, write three plausible distractors, sanity-check) times roughly 200 cards, plus 10 to 15 minutes per image-occlusion card (export the figure, place masks, label the regions) times 14 diagrams. Strip out the image-occlusion work and you're at 20 to 26 hours for the text cards alone. Skip the distractor work (cloze only) and the per-card minute number drops to about 3 to 4, which is roughly 10 to 13 hours for 200 cloze cards.

What's the realistic cards-per-hour ceiling for hand-made MCQ flashcards from a lecture deck?

Around 10 to 12 MCQ cards per hour, sustained over a multi-hour session. Above that rate the distractors start looking obviously wrong, which makes the card a recognition test rather than a recall test. Cloze cards run faster (roughly 18 to 20 per hour) because there are no distractors. Image-occlusion runs slower (4 to 6 per hour at best) because the masking has to be placed precisely on each labeled region. The bottleneck is not the typing; it's the few seconds you spend looking back at the slide to confirm what you remember is what the slide actually says.

Where does the time actually go? Is it the writing or the thinking?

It's the in-between work. The actual typing of a multiple-choice stem takes 30 to 45 seconds for a fluent writer. The fact extraction (reading the slide carefully enough to be sure what's testable on it) and the distractor writing each take roughly one to three minutes. A bad distractor (too obviously wrong, too obviously a synonym of the right answer, drawn from a different topic) makes the card useless the moment you've seen it once, which is why distractor writing is the part most students cut first when they're tired. That's also when card quality collapses.

Is making cards myself worth it if the activity is the learning?

Sometimes yes, often no, and the dividing line is whether you actually study what you carded. The hand-carding-as-encoding argument only holds if the cards then enter a spaced-repetition rotation that catches the forgetting curve. If the carding session was followed by zero or one passes through the deck, the encoding effect is gone within a week, and the hours spent carding produced no durable gain. The honest math: hand-carding gives you encoding benefit during the writing window plus the cards themselves. Generated cards give you only the cards. If you'd use the saved 25 hours for retrieval practice on the generated cards, you come out ahead.

What does the automated time look like, end to end?

On a real run with a 90-slide brachial-plexus anatomy deck containing 14 labeled figures: 58 seconds to produce 204 multiple-choice cards plus 14 image-occlusion cards. The terminal output is reproduced on this page. Add 30 seconds for the .apkg export, another 30 seconds for the Anki File > Import flow, and you're at about 2 minutes from PDF on disk to a deck sitting in your collection next to AnKing. The product behind that number scored 81.3 on a held-out three-document eval where Turbolearn scored 57.8.

How does the image-occlusion time gap compare to the MCQ time gap?

Per card, the image-occlusion gap is wider in absolute minutes, but the speedup ratios are similar. A hand-made MCQ takes ~7 minutes; a generated MCQ takes ~0.25 seconds (200 cards in 50 seconds is 4 per second). Ratio: about 1,700x. A hand-made image-occlusion card takes ~12 minutes; a generated one takes ~0.5 seconds (14 cards in 8 seconds is roughly 2 per second). Ratio: about 1,440x. The absolute hours, though, are bigger on the MCQ side: 200 MCQs is roughly 23 hours by hand, while 14 labeled figures is about 2.8 hours of mask placement. Both are real time. Most students cut image-occlusion first, then later wonder why they keep missing tagged-diagram questions on the practical.

Does the time math change if I'm using cloze deletion instead of MCQ?

Yes, downward. Cloze cards skip the distractor-writing step, which is roughly 40 to 50 percent of the per-card time on an MCQ. A fluent cloze writer can sustain 18 to 20 cards an hour from a lecture deck. A 200-card cloze pass on a 90-slide deck lands at 10 to 13 hours. Cloze loses one thing the MCQ has, though: distractor-quality is what enforces conceptual discrimination between near-synonyms. If you mostly study from cloze, you're trading 50 percent fewer hours for somewhat shallower testing on the same fact set.

What's the time math on a YouTube lecture or scanned PDF, where there are no slides to read directly?

Worse, materially. A 60-minute YouTube lecture takes 60 minutes just to watch before you start carding. A scanned PDF with no OCR layer either needs OCR (5-10 minutes for a 90-page handout) or has to be transcribed manually as you go (an extra few seconds per card spent reading off the page). For the automated path, both sources pass through the same pipeline: a YouTube lecture transcript is converted with timestamps preserved on the explain panel, and a scanned PDF goes through OCR first. End-to-end automated time is roughly 90 to 180 seconds depending on length, versus 75 to 100 minutes done manually.

Does this mean every med student should generate every deck and stop hand-carding?

No. Two real exceptions. First, for board content (Step 1, Step 2 CK, Step 3) the premade decks (AnKing, Zanki, Pepper) exist because thousands of students iterated on the same material for years. No generator beats that yet. Second, for a class where you genuinely use the carding session as your encoding pass and follow it with spaced retrieval, the hand-carding is buying you something. The honest rule is: when the alternative to generating is 'I will not card this lecture at all because I don't have 25 hours this week,' generated cards win every time. When the alternative is rigorous hand-carding followed by daily review, it's a closer call.

How is 81.3 vs 57.8 actually measured and does that matter for time cost?

The held-out three-document eval scores each tool on factual correctness, stem clarity, distractor quality, and question-type coverage across three independent lecture documents. 81.3 is Studyly, 78.0 Unattle, 68.0 Gauntlet, 57.8 Turbolearn. It matters for time because a 57.8 card set forces you back in to edit obvious failures (wrong distractors, off-topic stems), which adds 30 to 60 seconds per bad card and eats much of the time you thought you were saving. An 81.3 card set still has cards you'll edit, but the edit rate is roughly one in fifteen rather than one in three. Methodology is on studyly.io/quality.
