# Adaptive Drill Word Diversity

## Context
When adaptive drills focus on characters/bigrams with few matching dictionary words, the same words repeat excessively both within and across drills. Currently:
- Within-drill dedup uses a sliding window of only 4 words — too small when the matching word pool is small
- Cross-drill: no tracking at all — each drill creates a fresh `PhoneticGenerator` with no memory of previous drills
- Dictionary vs. phonetic is binary: if `matching_words >= 15`, use dictionary only; if `< 15`, use phonetic only. A pool of 16 words gets 100% dictionary (lots of repeats), while a pool of 14 gets 0% dictionary
## Changes

### 1. Cross-drill word history

Add `adaptive_word_history: VecDeque<HashSet<String>>` to `App`, tracking the words from the last 5 adaptive drills. Pass a flattened `HashSet<String>` into `PhoneticGenerator::new()`.
Word normalization: capture words from the generator output before the capitalization/punctuation/numbers post-processing (the `generator.generate()` call in `generate_text()` produces lowercase-only text). Words in history are therefore always lowercase ASCII with no punctuation — no normalization function is needed, since the generator already guarantees this format.
`src/app.rs`:
- Add `adaptive_word_history: VecDeque<HashSet<String>>` to the `App` struct, initialized empty
- In `generate_text()`, before creating the generator: flatten the history into a `HashSet` and pass it to the constructor
- After `generator.generate()` returns (before capitalization/punctuation): `split_whitespace()` into a `HashSet`, push it to the history, pop the front if `len > 5`
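The bookkeeping above can be sketched as follows. This is a minimal sketch, not the real `App`: the struct is reduced to the one new field, and `push_drill_words`/`flattened_history` are hypothetical helper names standing in for inline code in `generate_text()`.

```rust
use std::collections::{HashSet, VecDeque};

// Assumed cap from the plan: keep words from the last 5 adaptive drills.
const HISTORY_DRILLS: usize = 5;

struct App {
    adaptive_word_history: VecDeque<HashSet<String>>,
}

impl App {
    /// Flatten per-drill word sets into one set for the generator constructor.
    fn flattened_history(&self) -> HashSet<String> {
        self.adaptive_word_history.iter().flatten().cloned().collect()
    }

    /// Record one drill's words from the raw generator output
    /// (lowercase, captured before capitalization/punctuation).
    fn push_drill_words(&mut self, generated: &str) {
        let words: HashSet<String> =
            generated.split_whitespace().map(str::to_string).collect();
        self.adaptive_word_history.push_back(words);
        while self.adaptive_word_history.len() > HISTORY_DRILLS {
            self.adaptive_word_history.pop_front();
        }
    }
}

fn main() {
    let mut app = App { adaptive_word_history: VecDeque::new() };
    app.push_drill_words("the cat sat");
    app.push_drill_words("a cat ran");
    assert!(app.flattened_history().contains("ran"));
    // More than HISTORY_DRILLS pushes evicts the oldest drill's words.
    for _ in 0..6 {
        app.push_drill_words("filler words here");
    }
    assert_eq!(app.adaptive_word_history.len(), HISTORY_DRILLS);
    println!("history drills: {}", app.adaptive_word_history.len());
}
```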
Lifecycle/reset rules:
- Clear `adaptive_word_history` when `drill_mode` changes away from `Adaptive` (i.e., switching to Code/Passage mode)
- Clear when `drill_scope` changes (switching between branches or between global and branch)
- Do NOT persist across app restarts — session-local only (it's a `VecDeque`, not serialized)
- Do NOT clear on gradual key unlocks — as the skill tree progresses one key at a time, history should carry over to maintain cross-drill diversity within the same learning progression
- The effective "adaptive context key" is `(drill_mode, drill_scope)` — history clears when either changes. Other parameters (focus char, focus bigram, filter) change naturally within a learning progression and should not trigger resets
- This prevents cross-contamination between unrelated drill contexts while preserving continuity during normal adaptive flow
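A sketch of the reset rule, under stated assumptions: `DrillMode`/`DrillScope` variants and the `HistoryGuard` wrapper are hypothetical stand-ins for whatever `App` actually holds; the point is only that the history clears when the `(drill_mode, drill_scope)` key changes and survives everything else.

```rust
#[derive(PartialEq, Clone, Copy)]
enum DrillMode { Adaptive, Code, Passage }

#[derive(PartialEq, Clone, Copy)]
enum DrillScope { Global, Branch(u8) }

struct HistoryGuard {
    context: (DrillMode, DrillScope),
    drills: Vec<Vec<String>>, // stand-in for VecDeque<HashSet<String>>
}

impl HistoryGuard {
    /// Clear history iff the adaptive context key changed. Focus char/bigram
    /// changes never pass through here, so they never reset the history.
    fn on_context_change(&mut self, mode: DrillMode, scope: DrillScope) {
        if (mode, scope) != self.context {
            self.drills.clear();
            self.context = (mode, scope);
        }
    }
}

fn main() {
    let mut g = HistoryGuard {
        context: (DrillMode::Adaptive, DrillScope::Global),
        drills: vec![vec!["word".into()]],
    };
    g.on_context_change(DrillMode::Adaptive, DrillScope::Global); // same key: no-op
    assert_eq!(g.drills.len(), 1);
    g.on_context_change(DrillMode::Adaptive, DrillScope::Branch(2)); // scope changed
    assert!(g.drills.is_empty());
    println!("cleared: {}", g.drills.is_empty());
}
```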
`src/generator/phonetic.rs`:
- Add a `cross_drill_history: HashSet<String>` field to `PhoneticGenerator`
- Update the constructor to accept it
- In `pick_tiered_word()`, use weighted suppression instead of hard exclusion:
  - When selecting a candidate word, if it's in the within-drill `recent` window, always reject
  - If it's in `cross_drill_history`, accept it with a reduced probability based on pool coverage:
    - Guard: if the pool is empty, skip the suppression logic entirely (fall through to phonetic generation in hybrid mode)
    - `history_coverage = cross_drill_history.intersection(pool).count() as f64 / pool.len() as f64`
    - `accept_prob = 0.15 + 0.60 * history_coverage` (range: 15% when history covers few pool words → 75% when history covers most of the pool)
    - This prevents over-suppression in small pools where history covers most words, while still penalizing repeats in large pools
- Scale the attempt count to `pool_size.clamp(6, 12)`, with a final fallback that accepts any non-recent word
- Compute `accept_prob` once at the start of `generate()` alongside tier categorization (not per attempt)
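The coverage-scaled acceptance probability can be sketched as a free function. This is an illustrative sketch, not the actual `PhoneticGenerator` code: it takes the pool as a slice and counts membership directly rather than calling `HashSet::intersection` on two sets, but the arithmetic matches the formula above.

```rust
use std::collections::HashSet;

/// Probability of accepting a candidate word that already appeared in recent
/// drills. Scales with how much of the pool the history already covers, so
/// small pools where history covers nearly everything are not starved.
fn accept_prob(history: &HashSet<String>, pool: &[String]) -> f64 {
    if pool.is_empty() {
        // Guard from the plan: empty pool means no suppression
        // (hybrid mode falls through to phonetic generation).
        return 1.0;
    }
    let covered = pool.iter().filter(|w| history.contains(*w)).count();
    let history_coverage = covered as f64 / pool.len() as f64;
    0.15 + 0.60 * history_coverage
}

fn main() {
    let pool: Vec<String> = ["cat", "hat", "bat", "rat"].map(String::from).to_vec();
    let mut history = HashSet::new();
    // Empty history: repeats from history are rare (15% floor).
    assert!((accept_prob(&history, &pool) - 0.15).abs() < 1e-9);
    // History covers the whole pool: suppression relaxes to 75%.
    for w in &pool {
        history.insert(w.clone());
    }
    assert!((accept_prob(&history, &pool) - 0.75).abs() < 1e-9);
    println!("floor=0.15 ceiling=0.75");
}
```

In the generator itself, a candidate in `cross_drill_history` would then survive only when a uniform roll lands below this probability.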
### 2. Hybrid dictionary + phonetic mode

Replace the binary threshold with a gradient that mixes dictionary and phonetic words.
`src/generator/phonetic.rs`:
- Change constants: `MIN_REAL_WORDS = 8` (below: phonetic only); add `FULL_DICT_THRESHOLD = 60` (above: dictionary only)
- Calculate `dict_ratio` as a linear interpolation: `(count - 8) / (60 - 8)`, clamped to `[0.0, 1.0]`
- In the word generation loop, for each word: roll against `dict_ratio` to decide dictionary vs. phonetic
- Tier categorization still happens when `count >= MIN_REAL_WORDS` (needed for dictionary picks)
- Phonetic words also participate in the `recent` dedup window (already handled, since all words push to `recent`)
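The interpolation is a one-liner; a minimal sketch under the constants above (the `dict_ratio` function name is an assumption about how the plan would be factored):

```rust
const MIN_REAL_WORDS: usize = 8;       // below this: phonetic only
const FULL_DICT_THRESHOLD: usize = 60; // at or above this: dictionary only

/// Linear ramp from 0.0 at MIN_REAL_WORDS to 1.0 at FULL_DICT_THRESHOLD.
fn dict_ratio(matching_words: usize) -> f64 {
    let span = (FULL_DICT_THRESHOLD - MIN_REAL_WORDS) as f64;
    ((matching_words as f64 - MIN_REAL_WORDS as f64) / span).clamp(0.0, 1.0)
}

fn main() {
    assert_eq!(dict_ratio(5), 0.0);   // tiny pool: all phonetic
    assert_eq!(dict_ratio(100), 1.0); // big pool: all dictionary
    // The old cliff at 15 becomes a gentle slope:
    println!("ratio at 14 = {:.3}", dict_ratio(14)); // (14-8)/52
    println!("ratio at 16 = {:.3}", dict_ratio(16)); // (16-8)/52
    assert!(dict_ratio(16) - dict_ratio(14) < 0.05);
}
```

Each generated word would then roll a uniform `f64` against this ratio to choose dictionary vs. phonetic.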
### 3. Scale within-drill dedup window

Replace the fixed window of 4 with a window proportional to the filtered dictionary match count (the `matching_words` vec computed at the top of `generate()`):
- `pool_size <= 20`: window = `pool_size.saturating_sub(1).max(4)`
- `pool_size > 20`: window = `(pool_size / 4).min(20)`
- In hybrid mode, this is based on the dictionary pool size regardless of phonetic mixing — phonetic words add diversity naturally, so the window governs dictionary repeat pressure
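The two branches above can be sketched directly (the `dedup_window` helper name is an assumption; the plan may compute this inline):

```rust
/// Within-drill dedup window, scaled to the dictionary pool size.
fn dedup_window(pool_size: usize) -> usize {
    if pool_size <= 20 {
        // Small pool: window is nearly the whole pool, but never below
        // the old fixed minimum of 4.
        pool_size.saturating_sub(1).max(4)
    } else {
        // Large pool: a quarter of the pool, capped at 20.
        (pool_size / 4).min(20)
    }
}

fn main() {
    assert_eq!(dedup_window(0), 4);    // degenerate pool keeps the old minimum
    assert_eq!(dedup_window(10), 9);   // small pool: nearly the whole pool
    assert_eq!(dedup_window(40), 10);  // mid pool: quarter of the pool
    assert_eq!(dedup_window(200), 20); // large pool: capped at 20
    println!("windows ok");
}
```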
### 4. Tests

All tests use seeded `SmallRng::seed_from_u64()` for determinism (an existing pattern in the codebase).

Update existing tests: add a `HashSet::new()` argument to `PhoneticGenerator::new()` constructor calls (3 tests).

New tests:
- Cross-drill history suppresses repeats: generate drill 1 with a seeded RNG and a constrained filter (~20 matching words); collect its word set. Generate drill 2 with the same filter but a different seed and no history — compute the Jaccard index against drill 1 as a baseline. Generate drill 2 again with drill 1's words as history — compute the Jaccard index again. Assert the with-history Jaccard is at least 0.15 lower than the baseline (i.e., measurably less overlap). Use 100-word drills.
- Hybrid mode produces mixed output: use a filter that yields ~30 dictionary matches. Generate 500 words with a seeded RNG. Collect the output words and check them against the dictionary match set. With ~30 matches, `dict_ratio ≈ 0.42`. Since the seed is fixed, the output is deterministic — the 25%-65% band accommodates potential future seed changes rather than runtime variance. Assert the dictionary word percentage is within this range, and document the actual observed value for the chosen seed in a comment.
- Boundary conditions: with 5 matching words → assert 0% dictionary words (all phonetic). With 100+ matching words → assert 100% dictionary words. Seeded RNG.
- Weighted suppression graceful degradation: create a pool of 10 words with a history containing 8 of them. Generate 50 words. Verify no panics, the output is non-empty, and history words still appear (suppression is soft, not hard exclusion).
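The first test leans on the Jaccard index, which is worth pinning down since the 0.15 margin is asserted against it. A minimal sketch of the helper the test would need (the `jaccard` name is an assumption):

```rust
use std::collections::HashSet;

/// Jaccard index between two word sets: |A ∩ B| / |A ∪ B|.
fn jaccard(a: &HashSet<&str>, b: &HashSet<&str>) -> f64 {
    let inter = a.intersection(b).count() as f64;
    let union = a.union(b).count() as f64;
    if union == 0.0 { 0.0 } else { inter / union }
}

fn main() {
    let drill1: HashSet<&str> = ["cat", "hat", "bat", "rat"].into_iter().collect();
    let drill2: HashSet<&str> = ["cat", "hat", "mat", "sat"].into_iter().collect();
    let j = jaccard(&drill1, &drill2); // 2 shared words / 6 distinct words
    assert!((j - 2.0 / 6.0).abs() < 1e-9);
    println!("jaccard = {:.3}", j);
}
```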
## Files to modify

- `src/generator/phonetic.rs` — core changes: hybrid mixing, cross-drill history field, weighted suppression in `pick_tiered_word`, dedup window scaling
- `src/app.rs` — add the `adaptive_word_history` field, wire it through `generate_text()`, add reset logic on mode/scope changes
- `src/generator/mod.rs` — no changes (the `TextGenerator` trait signature is unchanged for API stability; the `cross_drill_history` parameter is internal to `PhoneticGenerator`'s constructor, not the trait interface)
## Verification

- `cargo test` — all existing and new tests pass
- Manual test: start an adaptive drill on an early skill tree branch (few unlocked letters, ~15-30 matching words). Run 5+ consecutive drills. Measure: unique words across the 5 drills should be notably higher than before (target: >70% unique across 5 drills for pools of 20+ words)
- Full alphabet test: with all keys unlocked, behavior should be essentially unchanged (`dict_ratio ≈ 1.0`, large pool, no phonetic mixing)
- Scope change test: switch between a branch drill and a global drill; verify no stale history leaks