GigaBrain CLI and REPL Interface#
The GigaBrain CLI is a comprehensive command-line interface to the graph database. It includes an interactive REPL (Read-Eval-Print Loop), administrative commands, and batch processing capabilities.
Installation and Setup#
Building the CLI#
cargo build --release --bin gigabrain-cli
The CLI binary will be located at target/release/gigabrain-cli.
Running the CLI#
# Start interactive REPL (default mode)
./gigabrain-cli
# Show help
./gigabrain-cli --help
# Execute single command
./gigabrain-cli --execute "MATCH (n) RETURN n"
# Execute commands from file
./gigabrain-cli --file commands.cypher
# Show graph statistics
./gigabrain-cli stats
Command Line Options#
Global Flags#
| Flag | Description |
|---|---|
| --help, -h | Show help information |
| --version, -V | Show version information |
| --no-history | Disable command history |
| --no-timing | Disable query timing display |
| --silent, -s | Suppress welcome messages and prompts |
Global Options#
| Option | Description | Default |
|---|---|---|
| --format <FORMAT> | Output format: table, json, csv, plain | table |
| --execute <COMMAND>, -e | Execute single command and exit | - |
| --file <FILE>, -f | Execute commands from file | - |
| --history-file <FILE> | Custom history file location | .gigabrain_history |
| --prompt <PROMPT> | Custom prompt string | "gigabrain> " |
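Flags and options can be combined in a single invocation. For example, using only the flags documented above:

```bash
# Run one query, emit JSON, and suppress the banner
./gigabrain-cli --silent --format json --execute "MATCH (n) RETURN count(n)"

# Run a script without timing output, using a custom history file
./gigabrain-cli --file commands.cypher --no-timing --history-file ~/.config/gigabrain/history
```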
Subcommands#
| Subcommand | Description |
|---|---|
| repl | Start interactive REPL (default) |
| exec <COMMAND> | Execute a single command |
| import <FILE> | Import data from file |
| export <FILE> | Export data to file |
| stats | Show graph statistics |
| benchmark | Run performance benchmark |
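For example (snapshot.json is an arbitrary file name):

```bash
./gigabrain-cli exec "MATCH (n) RETURN count(n)"   # one-shot query
./gigabrain-cli export snapshot.json               # dump the graph to a file
./gigabrain-cli import snapshot.json               # load it back
./gigabrain-cli benchmark                          # run the built-in benchmark
```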
Interactive REPL#
Starting the REPL#
./gigabrain-cli
The REPL provides an interactive shell with:
- Command history with persistent storage
- Tab completion for Cypher keywords
- Multiline query support
- Real-time query timing
- Pretty-printed table output
REPL Welcome Screen#
╭─────────────────────────────────────────────────────────────╮
│ GigaBrain Graph Database │
│ Interactive CLI v0.1.0 │
╰─────────────────────────────────────────────────────────────╯
Welcome to the GigaBrain interactive shell!
Type ':help' for available commands or start typing Cypher queries.
Use ':exit' to quit.
gigabrain>
Basic REPL Usage#
gigabrain> CREATE (alice:Person {name: 'Alice', age: 30})
(no columns returned)
0 rows returned
(Query completed in 1.23ms)
gigabrain> MATCH (n:Person) RETURN n.name, n.age
╭────────────┬─────────╮
│ n.name     │ n.age   │
├────────────┼─────────┤
│ (data)     │ (data)  │
╰────────────┴─────────╯
1 row returned
(Query completed in 0.56ms)
Meta Commands#
Meta commands start with : or \ and provide CLI-specific functionality.
Help and Information#
| Command | Description |
|---|---|
| :help, :h | Show comprehensive help |
| :stats | Display graph statistics |
| :show <type> | Show nodes, relationships, or schema |
Configuration#
| Command | Description |
|---|---|
| :format <type> | Set output format (table, json, csv, plain) |
| :timing | Toggle timing display on/off |
Session Management#
| Command | Description |
|---|---|
| :history | Show recent command history |
| :clear | Clear the screen |
| :exit, :quit, :q | Exit the CLI |
Data Operations#
| Command | Description |
|---|---|
| :export <file> | Export graph data to JSON file |
| :import <file> | Import graph data from JSON file |
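A typical session combining these meta commands (nightly.json is an arbitrary file name):

```
gigabrain> :format csv
Output format set to: csv
gigabrain> :export nightly.json
Graph exported to: nightly.json
gigabrain> :exit
```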
Administrative Commands#
Graph Analysis#
Structure Analysis#
gigabrain> :show nodes
Nodes (15 total):
1: Node { id: NodeId(1), labels: ["Person"], properties: {...} }
2: Node { id: NodeId(2), labels: ["Person"], properties: {...} }
...
gigabrain> :show relationships
Relationships:
1: Relationship { id: RelId(1), start: NodeId(1), end: NodeId(2), type: "KNOWS" }
...
gigabrain> :show schema
Schema:
Labels: ["Person", "Company", "Product"]
Property Keys: ["name", "age", "email", "founded"]
Relationship Types: ["KNOWS", "WORKS_FOR", "PURCHASED"]
Performance Analysis#
gigabrain> :stats
Graph Statistics:
Nodes: 15,432
Relationships: 42,156
Labels: 8
Property Keys: 23
Relationship Types: 12
Estimated Memory: 2,847,392 bytes
Backup and Restore#
Creating Backups#
# Create backup
backup graph_backup_20231201.json
# Output:
Backup created: graph_backup_20231201.json
Nodes: 15432, Relationships: 42156, Size: 12458392 bytes
Restoring from Backup#
# Restore from backup
restore graph_backup_20231201.json
# Output:
Restore completed: 15432 nodes, 42156 relationships restored
Performance Operations#
Graph Optimization#
optimize
Graph Optimization:
✓ Memory layout optimized
✓ Index structures rebuilt
✓ Cache cleared and warmed
✓ Internal statistics updated
Vacuum Operations#
vacuum
Graph Vacuum:
✓ Removed deleted nodes and relationships
✓ Compacted storage structures
✓ Rebuilt internal indexes
✓ Freed unused memory
Benchmarking#
benchmark
Performance Benchmark:
Node Creation: 1000 nodes in 2.34ms (427,350.43 nodes/sec)
Relationship Creation: 999 rels in 3.12ms (320,192.31 rels/sec)
Query Performance: 100 queries in 1.89ms (52,910.05 queries/sec)
Graph Analysis Commands#
Connectivity Analysis#
analyze connectivity
Connectivity Analysis:
Connected Components: 3
Largest Component: 12,847 nodes
Smallest Component: 2 nodes
Average Component Size: 5,144.00 nodes
Component Size Distribution:
Size 12847: 1 component
Size 2583: 1 component
Size 2: 1 component
Performance Analysis#
analyze performance
Performance Analysis:
Basic Stats Time: 1.23ms
Average Query Time: 0.45ms
Sample Size: 100 nodes
Estimated Memory: 2,847,392 bytes (2.71 MB)
Query Time Range: 0.12ms - 1.89ms
Structure Analysis#
analyze structure
Graph Structure Analysis:
Nodes: 15,432
Relationships: 42,156
Labels: 8
Property Keys: 23
Relationship Types: 12
Average Degree: 5.46
Degree Range: 0 - 234
Most Common Degrees:
Degree 1: 3,242 nodes
Degree 2: 2,891 nodes
Degree 3: 2,156 nodes
Degree 4: 1,842 nodes
Degree 5: 1,467 nodes
Data Import/Export#
JSON Import/Export#
Exporting Data#
# Export entire graph
export-json graph_export.json
Exported 57588 items to graph_export.json
# Using meta command
:export graph_data.json
Graph exported to: graph_data.json
Importing Data#
# Import from JSON
import-json graph_data.json
Imported 57588 items from graph_data.json
# Using meta command
:import graph_data.json
Imported 57588 items from: graph_data.json
CSV Import/Export#
CSV Export#
# Export to CSV with default query
export-csv nodes.csv
Exported 15432 records to nodes.csv
# Export with custom query
export-csv relationships.csv "MATCH (a)-[r]->(b) RETURN a.name, type(r), b.name"
Exported 42156 records to relationships.csv
CSV Import#
# Import from CSV
import-csv data.csv
Imported 1000 records from data.csv
File Format Examples#
JSON Export Format#
{
  "version": "1.0",
  "timestamp": "2023-12-01T15:30:45Z",
  "stats": {
    "nodes": 15432,
    "relationships": 42156,
    "labels": 8,
    "property_keys": 23,
    "relationship_types": 12
  },
  "nodes": [
    {
      "id": 1,
      "labels": ["Person"],
      "properties": {
        "name": "Alice",
        "age": 30
      }
    }
  ],
  "relationships": [
    {
      "id": 1,
      "start_node": 1,
      "end_node": 2,
      "type": "KNOWS",
      "properties": {}
    }
  ]
}
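Because the export embeds its own stats block, a quick consistency check is possible with standard tools. A minimal sketch, assuming jq is installed and the export fits in memory:

```bash
# Compare the declared node count with the actual length of the "nodes" array
jq '{declared: .stats.nodes, actual: (.nodes | length)}' graph_export.json
```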
Index and Constraint Management#
Index Operations#
# List indexes
index list
Available Indexes:
(Index management not yet implemented)
# Create index
index create Person name
Index created on Person:name (placeholder)
# Drop index
index drop Person name
Index dropped on Person:name (placeholder)
Constraint Operations#
# List constraints
constraint list
Schema Constraints:
Validation Rules:
(Constraint listing not yet implemented)
# Create constraint
constraint create Person name UNIQUE
Constraint creation (placeholder)
# Drop constraint
constraint drop Person name UNIQUE
Constraint removal (placeholder)
File-Based Command Execution#
Command Files#
Create a file with Cypher commands and comments:
# commands.cypher
# Create sample data
CREATE (alice:Person {name: 'Alice', age: 30})
CREATE (bob:Person {name: 'Bob', age: 25})
CREATE (company:Company {name: 'Tech Corp', founded: 2010})
# Create relationships
MATCH (alice:Person {name: 'Alice'}), (bob:Person {name: 'Bob'})
CREATE (alice)-[:KNOWS]->(bob)
MATCH (alice:Person {name: 'Alice'}), (company:Company {name: 'Tech Corp'})
CREATE (alice)-[:WORKS_FOR]->(company)
# Query the data
MATCH (p:Person)-[:WORKS_FOR]->(c:Company)
RETURN p.name as employee, c.name as company
Executing Command Files#
# Execute file in silent mode
./gigabrain-cli --file commands.cypher --silent
# Execute file with output
./gigabrain-cli --file commands.cypher
Executing commands from: commands.cypher
1> CREATE (alice:Person {name: 'Alice', age: 30})
(no columns returned)
0 rows returned
(Completed in 1.23ms)
2> CREATE (bob:Person {name: 'Bob', age: 25})
(no columns returned)
0 rows returned
(Completed in 0.89ms)
...
File execution completed:
Total lines: 15
Commands executed: 8
Errors: 0
Output Formats#
Table Format (Default)#
╭────────────┬─────────────╮
│ employee │ company │
├────────────┼─────────────┤
│ Alice │ Tech Corp │
│ Bob │ Startup │
╰────────────┴─────────────╯
2 rows returned
JSON Format#
gigabrain> :format json
Output format set to: json
gigabrain> MATCH (n:Person) RETURN n.name, n.age
{"columns": ["n.name", "n.age"], "rows": 2}
CSV Format#
gigabrain> :format csv
Output format set to: csv
gigabrain> MATCH (n:Person) RETURN n.name, n.age
n.name,n.age
value,value
value,value
Plain Format#
gigabrain> :format plain
Output format set to: plain
gigabrain> MATCH (n:Person) RETURN n.name, n.age
QueryResult { columns: ["n.name", "n.age"], rows: [...] }
Command History and Completion#
Command History#
- Automatically saves command history to .gigabrain_history
- Navigate history with the up/down arrows in the REPL
- Search recent history with the :history command
- Configure the history file location with --history-file
Tab Completion#
The CLI provides intelligent tab completion for:
- Cypher keywords (MATCH, CREATE, WHERE, RETURN, etc.)
- Built-in functions (count, sum, avg, etc.)
- Meta commands (:help, :stats, :export, etc.)
- Node labels and relationship types
- Property keys
Multiline Support#
gigabrain> MATCH (a:Person)-[:KNOWS]->(b:Person)
-> WHERE a.age > 25
-> RETURN a.name, b.name, a.age
-> ORDER BY a.age DESC
Performance and Monitoring#
Query Timing#
All queries show execution time by default:
(Query completed in 2.34ms)
Disable timing with the --no-timing flag at startup, or toggle it at runtime with the :timing meta command.
Memory Information#
memory
Memory Information:
Estimated Graph Memory: 2,847,392 bytes (2.71 MB)
Node Storage: 15432 nodes × 64 bytes = 987,648 bytes
Relationship Storage: 42156 rels × 48 bytes = 2,023,488 bytes
Schema Storage: ~1,376 bytes
Connection Information#
connections
Connection Information:
Active Connections: 1 (CLI)
REST API: Available on port 3000
gRPC API: Available on port 50051
WebSocket: Not implemented
Error Handling#
Query Errors#
gigabrain> INVALID QUERY
Error: Query parsing failed: Unexpected token 'INVALID'
File Errors#
./gigabrain-cli --file nonexistent.cypher
Error reading line 1: No such file or directory
File execution completed:
Total lines: 0
Commands executed: 0
Errors: 1
Connection Errors#
If the graph database is unavailable, the CLI will show appropriate error messages and suggestions for troubleshooting.
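In scripts, it can help to probe availability with a cheap query before starting a long batch. A minimal sketch using only the documented flags, assuming the CLI exits non-zero when the database is unreachable:

```bash
# Retry a trivial query until the database responds (give up after 5 tries)
for i in 1 2 3 4 5; do
    if ./gigabrain-cli --silent --execute "MATCH (n) RETURN count(n)" >/dev/null 2>&1; then
        echo "database reachable"
        break
    fi
    echo "attempt ${i} failed; retrying in 2s..."
    sleep 2
done
```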
Configuration#
Environment Variables#
# Custom history file
export GIGABRAIN_HISTORY_FILE="$HOME/.config/gigabrain/history"
# Default output format
export GIGABRAIN_OUTPUT_FORMAT="json"
# Disable timing by default
export GIGABRAIN_NO_TIMING="true"
Configuration File#
Create ~/.gigabrain/config.toml:
[cli]
prompt = "gb> "
history_file = "~/.gigabrain/history"
max_history = 2000
output_format = "table"
show_timing = true
enable_completion = true
[connection]
rest_endpoint = "http://localhost:3000"
grpc_endpoint = "http://localhost:50051"
timeout = 30
Integration Examples#
Shell Scripting#
#!/bin/bash
# backup_script.sh
DATE=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="backup_${DATE}.json"

# Create backup
./gigabrain-cli --execute "backup ${BACKUP_FILE}" --silent

if [ $? -eq 0 ]; then
    echo "Backup created successfully: ${BACKUP_FILE}"
    # Upload to cloud storage
    aws s3 cp "${BACKUP_FILE}" "s3://my-backups/gigabrain/"
else
    echo "Backup failed!"
    exit 1
fi
Batch Operations#
# Bulk data import
./gigabrain-cli --file data_import.cypher --format json > import_results.json
# Performance monitoring
./gigabrain-cli --execute "benchmark" --format csv > performance.csv
# Regular maintenance
./gigabrain-cli --execute "vacuum" --silent
./gigabrain-cli --execute "optimize" --silent
CI/CD Integration#
# .github/workflows/database_tests.yml
name: Database Tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build GigaBrain
        run: cargo build --release
      - name: Start Database
        run: |
          ./target/release/gigabrain &
          sleep 5  # give the server a moment to start before querying it
      - name: Run Test Queries
        run: ./target/release/gigabrain-cli --file tests/integration.cypher
      - name: Check Performance
        run: ./target/release/gigabrain-cli benchmark
Troubleshooting#
Common Issues#
Command Not Found#
# Make sure the binary is built
cargo build --release --bin gigabrain-cli
# Check binary location
ls -la target/release/gigabrain-cli
# Make executable
chmod +x target/release/gigabrain-cli
History Not Persisting#
# Check permissions
ls -la .gigabrain_history
# Specify custom location
./gigabrain-cli --history-file ~/.config/gigabrain/history
Query Timeout#
# For long-running queries, check server logs
# Increase timeout in server configuration
# Break down complex queries into smaller operations
Memory Issues#
# Check memory usage
./gigabrain-cli memory
# Run vacuum to free memory
./gigabrain-cli --execute "vacuum"
# Optimize graph structure
./gigabrain-cli --execute "optimize"
Debug Mode#
# Enable debug logging
RUST_LOG=debug ./gigabrain-cli
# Trace specific operations
RUST_LOG=gigabrain::cli=trace ./gigabrain-cli
Best Practices#
Query Writing#
- Use EXPLAIN to understand query execution plans (see the sketch after this list)
- Index frequently queried properties
- Limit result sets with LIMIT clause
- Use parameters for repeated queries
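A quick way to apply the EXPLAIN and LIMIT advice from the shell, assuming EXPLAIN is accepted as a query prefix as the list suggests:

```bash
# Inspect the plan for a potentially expensive traversal before running it
./gigabrain-cli --execute "EXPLAIN MATCH (a:Person)-[:KNOWS]->(b:Person) RETURN b.name"

# Cap the result set so interactive sessions stay responsive
./gigabrain-cli --execute "MATCH (p:Person) RETURN p.name ORDER BY p.age DESC LIMIT 10"
```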
Data Management#
- Regular backups before major changes
- Use transactions for related operations
- Monitor memory usage and optimize regularly
- Validate data integrity after imports (a sketch follows this list)
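A minimal post-import integrity check, assuming jq is available and that CSV output follows the header-then-rows layout shown earlier:

```bash
# Compare the export's declared node count against the live graph
EXPECTED=$(jq '.stats.nodes' graph_data.json)
# tail -n 1 grabs the value row beneath the CSV header
ACTUAL=$(./gigabrain-cli --silent --format csv \
    --execute "MATCH (n) RETURN count(n)" | tail -n 1)
if [ "$EXPECTED" != "$ACTUAL" ]; then
    echo "node count mismatch: expected ${EXPECTED}, got ${ACTUAL}" >&2
    exit 1
fi
```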
Performance Optimization#
- Create indexes on commonly queried properties
- Use PROFILE to identify slow queries
- Run vacuum and optimize operations on a regular schedule (see the cron sketch after this list)
- Monitor query timing and optimize as needed
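One way to schedule the vacuum and optimize passes is via cron. A sketch, assuming a hypothetical install path of /opt/gigabrain:

```bash
# crontab entries: vacuum nightly at 03:00, optimize at 04:00
0 3 * * * /opt/gigabrain/gigabrain-cli --execute "vacuum" --silent
0 4 * * * /opt/gigabrain/gigabrain-cli --execute "optimize" --silent
```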
Security#
- Validate input in batch files
- Use secure storage for backup files
- Limit access to history files
- Regular security audits of queries
This CLI makes GigaBrain accessible for both interactive exploration and automated operations, providing a single tool for graph database management and analysis.