Novelty Check Summary - Top 3 Ideas
Idea 2: Cross-Modal Budget Allocation (CMBA)
Score: 8/10 - ✅ PROCEED
Novelty: HIGH - No prior work systematically ablates how a fixed LoRA parameter budget is split across components (vision:projector:language ratios)
Closest work:
- Adaptive Capacity Allocation for VLA - targets vision-language-action (VLA) models rather than VLMs, and offers no projector analysis
- Neural Architecture Search for Variable LoRA Rank - relies on NAS rather than direct ablation; no ratio study
Key differentiator: the first work to answer "what ratio should I allocate to each component?" via a simple, controlled ablation
Risk: Low - clear gap, actionable contribution
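To make the proposed ablation concrete, here is a minimal sketch of the ratio grid; the component keys, the six ratios, and the budget of 64 are placeholder assumptions, not settings from this plan.

```python
# Minimal sketch of the CMBA ablation grid: split a fixed LoRA rank budget
# across the three VLM components by ratio, one run per ratio. The component
# keys and the budget of 64 are illustrative placeholders.

TOTAL_RANK_BUDGET = 64

# Candidate vision:projector:language ratios, one pilot run each.
RATIOS = [
    (1, 1, 1), (2, 1, 1), (1, 2, 1),
    (1, 1, 2), (4, 1, 4), (1, 4, 1),
]

def ranks_for(ratio, budget=TOTAL_RANK_BUDGET):
    """Convert a ratio tuple into integer per-component LoRA ranks."""
    total = sum(ratio)
    return {
        "vision_encoder": max(1, budget * ratio[0] // total),
        "projector":      max(1, budget * ratio[1] // total),
        "language_model": max(1, budget * ratio[2] // total),
    }

for ratio in RATIOS:
    print(ratio, ranks_for(ratio))
```

With Hugging Face PEFT, per-component ranks like these can be mapped onto concrete modules via LoraConfig's rank_pattern argument.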
Idea 10: Dynamic Rank Adjustment During Training
Score: 5/10 - ⚠️ SIGNIFICANT OVERLAP
Novelty: MEDIUM-LOW - Multiple papers already do this
Closest work:
- DyLoRA - Dynamic search-free LoRA, trains multiple ranks simultaneously
- Hierarchical and Dynamic Rank Adaptation (HyDRA) - Dynamic rank for mobile VLMs
- Simple and Effective Post-Tuning Compression - Train high rank, compress to low rank via SVD
- Dynamic Rank Assignment (L1RA) - Prune and redistribute ranks during training
Key finding: The idea of "start high, end low" is NOT novel. DyLoRA (2022) and recent papers (2024-2025) already implement this.
Differentiation needed: Would need a unique angle (e.g., modality-aware schedule, or specific to multimodal models)
Recommendation: ❌ ELIMINATE or MERGE with another idea
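For concreteness, a minimal sketch of the "start high, end low" schedule the overlap concerns, plus the modality-aware decay knob suggested above as a possible differentiator; all values are illustrative.

```python
# "Start high, end low" rank schedule of the kind DyLoRA-style methods
# already cover; shown only to make the overlap concrete. The per-modality
# `decay` rate is the hypothetical modality-aware differentiator.

def rank_at(step, total_steps, r_start=32, r_end=4, decay=1.0):
    """Linearly anneal rank from r_start to r_end; `decay` lets each
    modality shrink at its own rate (modality-aware schedule)."""
    frac = min(1.0, decay * step / total_steps)
    return round(r_start + (r_end - r_start) * frac)

# e.g. vision adapters could decay faster than language adapters:
for step in (0, 500, 1000):
    print(step, {"vision": rank_at(step, 1000, decay=2.0),
                 "language": rank_at(step, 1000, decay=1.0)})
```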
Idea 4: Hierarchical Rank Allocation (HRA)
Score: 6/10 - ⚠️ PARTIAL OVERLAP
Novelty: MEDIUM - Hierarchical concept exists, but not exactly as proposed
Closest work:
- Hierarchical Dynamic Low-Rank Adaptation - Hierarchical LoRA for pruned LLMs
- Hierarchical and Dynamic Rank Adaptation (HyDRA) - Hierarchical + dynamic for mobile VLMs
- Dynamic Rank Allocation (ARD-LoRA) - Layer-wise + head-wise allocation
Key finding: "Hierarchical" allocation exists, but not specifically as "coarse-grained modality-level → fine-grained layer-level"
Differentiation:
- Existing work: hierarchical within single modality (layer → head)
- Our idea: hierarchical across modalities (modality → layer)
Recommendation: ⚠️ PROCEED WITH CAUTION - needs clear framing to differentiate from HyDRA
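A minimal sketch of the proposed two-stage allocation, coarse modality-level split followed by a fine layer-level split; the modality weights, layer counts, and uniform within-modality split are placeholder assumptions.

```python
# Two-stage HRA allocation sketch: first split the total rank budget across
# modalities (coarse), then across layers within each modality (fine).
# Weights and layer counts are illustrative, not from the plan.

def hra_allocate(total_rank, modality_weights, layers_per_modality):
    """modality_weights: e.g. {"vision": 1, "projector": 1, "language": 2}"""
    w_sum = sum(modality_weights.values())
    plan = {}
    for name, w in modality_weights.items():
        modality_budget = total_rank * w // w_sum          # coarse stage
        n_layers = layers_per_modality[name]
        per_layer = max(1, modality_budget // n_layers)    # fine stage (uniform)
        plan[name] = [per_layer] * n_layers
    return plan

print(hra_allocate(64, {"vision": 1, "projector": 1, "language": 2},
                   {"vision": 4, "projector": 1, "language": 8}))
```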
Final Ranking After Novelty Check
✅ High Priority (Strong Novelty)
- Idea 2: Cross-Modal Budget Allocation (CMBA) - 8/10, clear gap, actionable
⚠️ Medium Priority (Needs Refinement)
- Idea 4: Hierarchical Rank Allocation (HRA) - 6/10, partial overlap with HyDRA, needs differentiation
- Idea 5: Modality-Specific Learning Rate Scaling (MSLR) - not yet formally checked; an initial search found no overlap, so it is likely novel (a minimal param-group sketch follows after this list)
❌ Eliminated
- Idea 10: Dynamic Rank Adjustment - 5/10, DyLoRA and HyDRA already do this
- Idea 1: MA-LoRA - Too similar to MokA/MARS
- Idea 3: ZRP - SR-LoRA already covers this
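For reference, a minimal sketch of what MSLR could look like as PyTorch optimizer param groups; the submodule names and scale factors are hypothetical, not part of the idea as written.

```python
# Modality-specific learning-rate scaling (Idea 5) as optimizer param
# groups: one group per modality, each with its own LR multiplier.
# The submodule names and scales are placeholder assumptions.
import torch

def mslr_param_groups(model, base_lr=2e-4, scales=None):
    scales = scales or {"vision_encoder": 0.1, "projector": 1.0,
                        "language_model": 0.5}
    groups = []
    for name, scale in scales.items():
        module = getattr(model, name)  # assumes these submodules exist
        groups.append({"params": module.parameters(),
                       "lr": base_lr * scale})
    return groups

# optimizer = torch.optim.AdamW(mslr_param_groups(model))
```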
Recommendation for Phase 5: Pilot Experiments
Pilot only Idea 2 (CMBA) - it has the strongest novelty signal and lowest risk.
Budget:
- CMBA pilot: 6 ratios × 2h = 12 GPU-hours
- One piloted idea is well under MAX_PILOT_IDEAS=3, but 12 GPU-hours exceeds MAX_TOTAL_GPU_HOURS=8 per idea; trimming the grid to 4 ratios (4 × 2h = 8 GPU-hours) keeps the pilot within the cap (see the check below)
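A quick arithmetic check of the pilot against the stated caps, using the numbers above:

```python
# Budget sanity check for the CMBA pilot against the stated caps.
MAX_PILOT_IDEAS = 3
MAX_TOTAL_GPU_HOURS_PER_IDEA = 8

n_pilot_ideas = 1
hours_per_run = 2
n_ratios = 6
pilot_hours = n_ratios * hours_per_run          # 12 GPU-hours

assert n_pilot_ideas <= MAX_PILOT_IDEAS          # one idea piloted: OK
print("over budget" if pilot_hours > MAX_TOTAL_GPU_HOURS_PER_IDEA
      else "within budget")                      # -> "over budget"

# Largest grid that fits the per-idea cap:
max_ratios = MAX_TOTAL_GPU_HOURS_PER_IDEA // hours_per_run  # 4 ratios
```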
Skip pilots for Ideas 4 and 5 - validate on paper only and flag as "needs manual pilot" due to novelty concerns.
Rationale:
- Idea 2 has clear novelty (8/10) and low risk - worth empirical validation
- Idea 4 has overlap with HyDRA - need to refine framing before spending GPU time
- Idea 10 is eliminated due to DyLoRA/HyDRA overlap
Key Papers to Cite
For CMBA:
- Adaptive Capacity Allocation for VLA
- Neural Architecture Search for Variable LoRA Rank
- MokA: Multimodal Low-Rank Adaptation
- Q-Former PEFT
For HRA (if pursued):
- HyDRA: Hierarchical and Dynamic Rank Adaptation
- ARD-LoRA: Dynamic Rank Allocation
- Hierarchical Dynamic LoRA for Pruned LLMs
For Dynamic Rank Adjustment (eliminated):
- DyLoRA
- HyDRA: Hierarchical and Dynamic Rank Adaptation
- L1RA: Dynamic Rank Assignment
- Simple and Effective Post-Tuning Compression