Draft: New /export-context Command for NotebookLM Integration #3476

Draft · wants to merge 5 commits into main
Conversation

lutzleonhardt

PR Description: New /export-context Command for NotebookLM Integration

Overview

This PR adds a new command, /export-context, which exports the files in the Aider chat context to the file system. The primary goal is to facilitate integration with NotebookLM, which has a large context window but only accepts sources as text files or text pasted from the clipboard.

Key Features

Two Export Modes

  1. Flat Export Mode (/export-context):

    • Exports each file in the chat context (both editable and read-only) as an individual file
    • Creates a timestamped directory in the system's temp folder
    • Files have flattened paths to ensure compatibility
  2. Chunked Export Mode (/export-context <chunk-size>):

    • Combines files into chunks based on a specified line count (minimum 50 lines)
    • Treats each file as atomic (not splitting individual files across chunks)
    • Adds clear file boundary markers for easy navigation
    • Useful for large repositories that exceed NotebookLM's individual file limits (a sketch of both export modes follows below)
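
To make the two modes concrete, here is a minimal sketch of the export logic in Python. All names (flatten_path, export_flat, export_chunked, the BEGIN/END marker format) are illustrative assumptions, not the actual identifiers from this PR:

import os
import tempfile
from datetime import datetime
from pathlib import Path

MIN_CHUNK_LINES = 50  # matches the 50-line minimum described above


def flatten_path(rel_path: str) -> str:
    """Flatten 'src/ui/main.py' to 'src_ui_main.py' so every export is a plain file name."""
    return rel_path.replace(os.sep, "_").replace("/", "_")


def make_export_dir() -> Path:
    """Create a timestamped directory under the system temp folder."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    out = Path(tempfile.gettempdir()) / f"aider-export-{stamp}"
    out.mkdir(parents=True, exist_ok=True)
    return out


def export_flat(files: dict[str, str]) -> Path:
    """Flat mode: one exported .txt file per chat-context file."""
    out = make_export_dir()
    for rel_path, content in files.items():
        (out / (flatten_path(rel_path) + ".txt")).write_text(content, encoding="utf-8")
    return out


def export_chunked(files: dict[str, str], chunk_size: int) -> Path:
    """Chunked mode: pack whole files into chunks of roughly chunk_size lines."""
    chunk_size = max(chunk_size, MIN_CHUNK_LINES)
    out = make_export_dir()
    chunk_lines: list[str] = []
    chunk_no = 1

    def flush():
        nonlocal chunk_lines, chunk_no
        if chunk_lines:
            (out / f"chunk-{chunk_no:03d}.txt").write_text("\n".join(chunk_lines), encoding="utf-8")
            chunk_no += 1
            chunk_lines = []

    for rel_path, content in files.items():
        body = content.splitlines()
        if chunk_lines and len(chunk_lines) + len(body) > chunk_size:
            flush()  # this file would overflow the current chunk; start a new one
        # Clear boundary markers so NotebookLM can tell where each file begins and ends.
        chunk_lines += [f"===== BEGIN FILE: {rel_path} ====="]
        chunk_lines += body
        chunk_lines += [f"===== END FILE: {rel_path} =====", ""]
    flush()
    return out

Because files are treated as atomic, a chunk may run over the requested size when a single file is longer than the chunk size, but no file is ever split across two chunks.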

Benefits for NotebookLM Users

  • Handle Large Repositories: When a code repository is too large to be copied entirely into NotebookLM, the chunking feature allows breaking it up into manageable pieces
  • Selective Context Loading: NotebookLM allows enabling/disabling each file/chunk, giving users control over exactly which code is included in LLM/Gemini requests
  • Preserve File Structure: File boundary markers ensure that NotebookLM understands where each file begins and ends
  • Optimized Token Usage: By chunking files intelligently, users can maximize the effectiveness of NotebookLM's large token window

Usage Examples

# Export all files in chat context individually
/export-context

# Export files in chunks of 5000 lines
/export-context 5000

Implementation Details

The implementation includes:

  • Helper methods to organize the export process
  • Robust error handling for file operations
  • Clear user feedback about the export results
  • Completion suggestions for common chunk sizes
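
For context, the command could be wired into aider roughly as sketched below, assuming aider's existing cmd_*/completions_* conventions for slash commands; the method and helper names (cmd_export_context, get_context_files) are assumptions, not necessarily the PR's actual code. It reuses export_flat/export_chunked from the sketch above:

# Methods to be added to aider's Commands class (illustrative sketch).
def completions_export_context(self):
    # Completion suggestions for common chunk sizes.
    return ["1000", "2000", "5000", "10000"]

def cmd_export_context(self, args):
    "Export chat context files to disk, optionally chunked by line count"
    try:
        files = self.get_context_files()  # hypothetical accessor: {rel_path: content}
        if args.strip():
            out_dir = export_chunked(files, int(args.strip()))
        else:
            out_dir = export_flat(files)
    except (OSError, ValueError) as err:
        self.io.tool_error(f"Export failed: {err}")
        return
    self.io.tool_output(f"Exported chat context to {out_dir}")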

@paul-gauthier (Collaborator)

Thanks for your interest in aider and for taking the time to make this PR.

Why not use /copy-context?

@lutzleonhardt (Author)

I see these advantages:

  • with /copy-context you get one big chunk in the clipboard, and NotebookLM only lets you paste ~10k LOC
  • with chunking you can also upload big codebases via NotebookLM's normal txt file upload (besides PDF, txt is the only file type it accepts)
  • if you upload individual files into NotebookLM, each gets a checkbox to decide what to include in the LLM request. For example, I chunked the whole Avalonia UI project (5000 files) into around 20 chunks (~5000 LOC per chunk), then went through the chunks (only one chunk enabled per LLM request) and asked NotebookLM to return the files relevant to my question/feature. Really useful for gathering the required context for a feature in big codebases
  • for repos that are even bigger, I use this process together with Draft: Add read-only stub files feature #3056: first add the whole repo as stubs, create chunks out of it, upload them to NotebookLM, and ask about the relevance of each individual chunk (NotebookLM's context window is around 1 million tokens)
  • so basically I use this to gather the required files for a feature in big codebases
  • I will attach a video to make this process clearer (see the example session below)
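
To make that workflow concrete, a session might look roughly like this (the stub-adding command name is hypothetical, since #3056 is still a draft):

# hypothetical session combining this PR with draft PR #3056
/add-stubs src/          # load the whole repo as read-only stubs (command name assumed)
/export-context 5000     # write ~5000-line chunks to a timestamped temp directory
# then upload the chunk .txt files to NotebookLM and enable one chunk per request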
