Testing Guide
This document describes how to run tests for the SchemaX project.
Overview
SchemaX uses different testing frameworks for different components:
- Python SDK: pytest with coverage reporting (306 tests)
- VS Code Extension: Jest with React Testing Library
- Integration: End-to-end workflow tests
Before a release, use the Release checklist section below to confirm coverage aligns with this guide.
Python SDK Testing
Prerequisites
cd packages/python-sdk
pip install -e ".[dev]"
# Or use UV (faster):
# uv pip install -e ".[dev]"
Running Tests
# Run all tests (fast)
pytest tests/ -q
# Run with verbose output
pytest tests/ -v
# Run specific test file
pytest tests/unit/test_sql_generator.py -v
# Run specific test
pytest tests/unit/test_sql_generator.py::TestCatalogSQL::test_add_catalog -xvs
# Run with coverage
pytest tests/ --cov=src/schemax --cov-report=term-missing
# Run with coverage HTML report
pytest tests/ --cov=src/schemax --cov-report=html
open htmlcov/index.html
Test Structure
packages/python-sdk/tests/
├── unit/ # Unit tests
│ ├── test_storage_v4.py # V4 storage layer tests
│ ├── test_sql_generator.py # SQL generation tests
│ ├── test_state_reducer.py # State reducer tests
│ ├── test_state_differ.py # State differ tests (including views)
│ ├── test_catalog_mapping.py # Catalog name mapping tests
│ ├── test_unity_executor.py # Databricks executor tests
│ ├── test_deployment_tracker.py # Deployment tracking tests
│ ├── test_dependency_graph.py # Dependency graph & cycle detection tests
│ └── test_apply_command.py # Apply command tests (interactive/non-interactive)
├── integration/ # Integration tests
│ ├── test_workflows.py # End-to-end workflow tests (state, snapshot, diff)
│ ├── test_workflows_e2e.py # E2E tests for all situations in Workflows doc
│ ├── test_command_workflows.py # CLI command workflow tests (validate, sql, snapshot, diff)
│ └── test_view_dependencies.py # View dependency and FQN qualification tests
├── providers/ # Provider-specific tests
│ └── base/
│ └── test_hierarchy.py # Hierarchy tests
├── fixtures/ # Test fixtures
└── utils/ # Test utilities
└── operation_builders.py # Operation builder helpers
Current Status
- ✅ Python SDK: 306 tests passing (12 skipped)
- ✅ VS Code Extension: 25 Jest tests passing
- ✅ SQLGlot validation integrated for SQL syntax checking
- ✅ Dependency graph tests with cycle detection and topological sorting
- ✅ View support tests including FQN qualification regression tests
- ✅ State differ generic tests to catch missing object type implementations
- ✅ Code coverage reporting enabled for both Python and TypeScript
- ✅ Non-interactive mode tests for CI/CD compatibility
Apply Command Tests
The test_apply_command.py test suite ensures the schemax apply command works correctly in both interactive and non-interactive modes. This is critical for CI/CD pipelines where user prompts would cause the command to hang.
Tests included:
- ✅ test_noninteractive_mode_auto_creates_snapshot - Verifies snapshot auto-creation without prompting
- ✅ test_interactive_mode_prompts_for_snapshot - Verifies user prompt with 3 choices (create/continue/abort)
- ✅ test_interactive_mode_create_snapshot - Verifies snapshot creation when user chooses "create"
- ✅ test_sql_preview_noninteractive_skips_prompt - Verifies SQL preview doesn't prompt in CI/CD mode
- ✅ test_workspace_without_uncommitted_ops - Verifies no prompts when changelog is empty
Why these tests matter:
- Prevents regressions that would break CI/CD pipelines
- Ensures the --no-interaction flag is respected throughout the command
- Validates that interactive mode provides proper user choices
- Guarantees the command never hangs waiting for input in automated environments
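The "never hangs" guarantee can be enforced by making any prompt an immediate failure. A minimal sketch of the pattern, assuming a simplified stand-in for the command (apply_command here is illustrative, not the SDK's real entry point):

```python
import builtins

def apply_command(no_interaction: bool, has_uncommitted: bool) -> str:
    # Illustrative stand-in for the snapshot-handling step of `schemax apply`.
    if not has_uncommitted:
        return "nothing to do"
    if no_interaction:
        return "auto-created snapshot"
    choice = input("Uncommitted ops found. [c]reate/[k]ontinue/[a]bort? ")
    return {"c": "created snapshot", "k": "continued", "a": "aborted"}[choice]

def test_noninteractive_never_prompts():
    # Any call to input() in --no-interaction mode is a regression: fail loudly.
    def forbidden_input(*_args):
        raise AssertionError("apply prompted despite --no-interaction")
    original_input = builtins.input
    builtins.input = forbidden_input
    try:
        assert apply_command(no_interaction=True, has_uncommitted=True) == "auto-created snapshot"
    finally:
        builtins.input = original_input
```

The same idea applies regardless of CLI framework: replace the prompt primitive with something that raises, then assert the command completes.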
Running apply command tests:
# Run all apply command tests
pytest tests/unit/test_apply_command.py -v
# Run specific test
pytest tests/unit/test_apply_command.py::TestApplyCommand::test_noninteractive_mode_auto_creates_snapshot -xvs
View Dependency Tests
The view test suites ensure dependency-aware SQL generation works correctly, including FQN qualification and batching optimizations.
Integration Tests (test_view_dependencies.py):
- ✅ test_unqualified_table_refs_are_qualified_with_backticks - REGRESSION TEST for FQN qualification
- ✅ test_view_with_multiple_tables_all_qualified - Validates JOINs are properly qualified
- ✅ test_create_and_update_view_batched - Verifies CREATE + UPDATE squashing
- ✅ test_create_and_multiple_updates_batched - Ensures last UPDATE wins
- ✅ test_view_dependencies_are_sorted_correctly - Validates topological sorting
- ✅ test_circular_view_dependency_detection - Detects cycles and falls back gracefully
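The transformation that the FQN regression test locks in can be sketched naively. The SDK uses SQLGlot for real parsing; this regex version is only illustrative, and qualify_table_refs / known_tables are hypothetical names:

```python
import re

def qualify_table_refs(sql: str, catalog: str, schema: str, known_tables: set) -> str:
    # Prefix bare table names after FROM/JOIN with a backtick-quoted FQN.
    def repl(match):
        keyword, name = match.group(1), match.group(2)
        if name in known_tables:
            return f"{keyword}`{catalog}`.`{schema}`.`{name}`"
        return match.group(0)
    return re.sub(r"(\bFROM\s+|\bJOIN\s+)(\w+)", repl, sql, flags=re.IGNORECASE)
```

Given "SELECT * FROM orders JOIN customers ON 1=1" with catalog "main" and schema "sales", both table references come back as backtick-quoted three-part names, which is exactly what prevents TABLE_OR_VIEW_NOT_FOUND at deploy time.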
State Differ Tests (test_state_differ.py):
- ✅ test_diff_added_view - Detects newly added views
- ✅ test_diff_removed_view - Detects removed views
- ✅ test_diff_renamed_view_with_history - Detects view renames
- ✅ test_diff_view_definition_change - Detects SQL definition changes
- ✅ test_diff_view_comment_change - Detects comment changes
- ✅ test_diff_multiple_views_added - REGRESSION TEST for bulk view additions
- ✅ test_new_schema_with_views_includes_all_views - REGRESSION TEST for views in new schemas
- ✅ test_new_catalog_with_views_includes_all_views - REGRESSION TEST for views in new catalogs
- ✅ Generic object coverage tests to catch future bugs with any object type
Dependency Graph Tests (test_dependency_graph.py):
- ✅ Cycle detection with detailed paths
- ✅ Topological sorting
- ✅ Breaking change detection
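Cycle detection with a detailed path can be sketched with a three-color depth-first search. This is only an illustration of the technique; find_cycle is not the SDK's actual API:

```python
def find_cycle(edges):
    """DFS cycle detection over {node: [dependencies]}; returns the cycle path or None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in edges}
    stack = []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for dep in edges.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                # Back edge: report the path from the repeated node around to itself.
                return stack[stack.index(dep):] + [dep]
            if color.get(dep, WHITE) == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for node in edges:
        if color[node] == WHITE:
            cycle = visit(node)
            if cycle:
                return cycle
    return None
```

Returning the actual path (e.g. a → b → c → a) rather than a bare boolean is what makes the "detailed paths" error messages possible.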
Why these tests matter:
- Catches FQN qualification bugs that cause TABLE_OR_VIEW_NOT_FOUND errors
- Validates view SQL is generated in correct order (tables before views)
- Ensures state differ includes all views when diffing snapshots
- Generic tests prevent similar bugs for future object types
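The "tables before views" ordering is a plain topological sort; the standard library's graphlib shows the idea (the view and table names below are made up for illustration):

```python
from graphlib import TopologicalSorter

# Each key depends on the names in its set. static_order() yields
# dependencies first, so tables precede the views that read them.
deps = {
    "sales_view": {"orders", "customers"},  # a view over two tables
    "daily_rollup": {"sales_view"},         # a view over another view
    "orders": set(),
    "customers": set(),
}
order = list(TopologicalSorter(deps).static_order())
```

TopologicalSorter also raises graphlib.CycleError on circular input, which is one reasonable way to trigger the graceful fallback the cycle test exercises.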
Running view tests:
# Run all view dependency tests
pytest tests/integration/test_view_dependencies.py -v
# Run specific regression test
pytest tests/integration/test_view_dependencies.py::TestViewFQNQualification::test_unqualified_table_refs_are_qualified_with_backticks -xvs
# Run state differ view tests
pytest tests/unit/test_state_differ.py::TestUnityStateDifferViews -v
# Run state differ regression tests
pytest tests/unit/test_state_differ.py::TestUnityStateDifferViewRegression -v
# Run dependency graph tests
pytest tests/unit/test_dependency_graph.py -v
Workflow E2E tests (WORKFLOWS.md)
The file tests/integration/test_workflows_e2e.py implements end-to-end and integration tests for all five situations described in WORKFLOWS.md:
| Situation | Test class | What is tested |
|---|---|---|
| 1. Greenfield single dev | TestWorkflowS1GreenfieldSingleDev | Init → ops → snapshot → validate → sql → diff; apply dry-run (stubbed) |
| 2. Greenfield multi-dev | TestWorkflowS2GreenfieldMultiDev | Validate after ops; snapshot validate (no stale); snapshot rebase |
| 3. Brownfield | TestWorkflowS3Brownfield | Import with adopt-baseline (stores baseline); rollback before baseline blocked; rollback with --force |
| 4. Apply failure and rollback | TestWorkflowS4ApplyFailureAndRollback | Apply dry-run then complete rollback dry-run; partial rollback CLI (stubbed DB) |
| 5. Diff, validate, SQL-only | TestWorkflowS5DiffValidateSqlOnly | validate, sql, diff via CLI and API (no live DB) |
Run all workflow e2e tests:
pytest tests/integration/test_workflows_e2e.py -v
VS Code Extension Testing
Prerequisites
cd packages/vscode-extension
npm install
Running Tests
# Run all tests
npm test
# Run tests in watch mode
npm run test:watch
# Run tests with coverage
npm run test:coverage
# Run specific test file
npm test -- storage-v4.test.ts
# Run tests with verbose output
npm test -- --verbose
Test Structure
packages/vscode-extension/tests/
├── unit/ # Unit tests
│ └── storage-v4.test.ts # Storage layer tests (5 tests)
├── webview/ # React component tests
│ └── components/
│ ├── ColumnGrid.test.tsx # Column grid tests (7 tests)
│ ├── Sidebar.test.tsx # Sidebar tests (2 tests)
│ └── TableConstraints.test.tsx # Constraints tests (7 tests)
├── providers/ # Provider tests
│ └── base/
│ └── hierarchy.test.ts # Hierarchy tests (4 tests)
├── fixtures/ # Test fixtures
│ ├── sample-project-v4.json # V4 project fixture
│ ├── sample-changelog.json # Changelog fixture
│ ├── sample-snapshot.json # Snapshot fixture
│ └── sample-state.ts # State fixtures
├── __mocks__/ # Mocks
│ ├── vscode.ts # VS Code API mock
│ └── styleMock.js # CSS mock
├── setup.ts # Test setup
└── utils/ # Test utilities
Test Summary: 25 passing tests across 5 test files
Testing Patterns
Storage Layer Testing
import { describe, test, expect, jest, beforeEach } from '@jest/globals';
import * as fs from 'fs/promises';
import { ensureProjectFile, readProject } from '../../src/storage-v4';
// Mock fs/promises
jest.mock('fs/promises');
const mockFs = fs as jest.Mocked<typeof fs>;
describe('Storage V4', () => {
beforeEach(() => {
jest.clearAllMocks();
});
test('should create project file', async () => {
mockFs.writeFile.mockResolvedValue(undefined);
// Test implementation
});
});
React Component Testing
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { ColumnGrid } from '../../../src/webview/components/ColumnGrid';
// Mock Zustand store
jest.mock('../../../src/webview/state/useDesignerStore', () => ({
useDesignerStore: () => ({
addColumn: jest.fn(),
renameColumn: jest.fn(),
// ... other store methods
}),
}));
describe('ColumnGrid Component', () => {
test('should render columns', () => {
render(<ColumnGrid tableId="table_001" columns={mockColumns} />);
expect(screen.getByText('id')).toBeInTheDocument();
});
});
SQL Generator Testing
import { UnitySQLGenerator } from '../../src/providers/unity/sql-generator';
import { UnityState } from '../../src/providers/unity/models';
describe('Unity SQL Generator', () => {
test('should generate SQL for add_catalog', () => {
const state: UnityState = { catalogs: [] };
const generator = new UnitySQLGenerator(state);
const op = {
id: 'op_001',
op: 'add_catalog',
payload: { catalogId: 'cat_001', name: 'test_catalog' },
};
const result = generator.generateSQLForOperation(op);
expect(result.sql).toContain('CREATE CATALOG IF NOT EXISTS test_catalog');
});
});
Mocking VS Code API
The VS Code API is mocked in tests/__mocks__/vscode.ts:
export const window = {
showInformationMessage: jest.fn(),
showErrorMessage: jest.fn(),
createOutputChannel: jest.fn(() => ({
appendLine: jest.fn(),
show: jest.fn(),
})),
};
export const workspace = {
workspaceFolders: [],
getConfiguration: jest.fn(),
};
Coverage Thresholds
The project has baseline coverage thresholds configured in jest.config.js:
coverageThreshold: {
global: {
branches: 2,
functions: 2,
lines: 4,
statements: 4,
},
}
Current Coverage: ~4% overall (baseline established)
- React components: ~10-25% coverage
- Core providers: Initial test coverage
- Focus: Demonstrating testing infrastructure and patterns
Integration Tests
Running Integration Tests
# From project root
cd examples/basic-schema
# Validate schema
schemax validate
# Generate SQL
schemax sql --output test-migration.sql
CI/CD Integration
Integration tests run automatically in the integration-tests.yml workflow:
- Builds VS Code extension
- Installs Python SDK
- Tests CLI on example project
Smoke Tests
Quick validation to ensure everything builds correctly:
# From project root
./scripts/smoke-test.sh
The smoke test script:
- ✅ Builds VS Code extension
- ✅ Validates extension package
- ✅ Installs Python SDK
- ✅ Validates example schema with CLI
Quality Checks (Local)
Run all quality checks locally before committing:
# From project root
./devops/run-checks.sh
This script runs:
- ✅ Python code formatting check (Black, 100 char line length)
- ✅ Python linting (Ruff)
- ✅ Python SDK tests (pytest - 306 tests)
- ✅ VS Code Extension tests (Jest - 25 tests)
- ✅ Smoke tests (extension build, SDK install, CLI validation)
Exit code 0 = All checks passed, ready to commit
Exit code 1 = One or more checks failed, fix before committing
CI/CD Pipeline
The quality checks pipeline runs automatically on:
- Push to main or develop
- Pull requests to main or develop
- Manual trigger
Pipeline Jobs
- code-formatting: Black + Ruff checks
- python-tests: 306 pytest tests + coverage reporting
- extension-tests: Jest tests + coverage reporting
- smoke-tests: Extension build + SDK validation
- commit-signatures: GPG signature verification
Status
✅ All checks configured and passing locally
Test Development Guidelines
Writing Good Tests
- Arrange-Act-Assert Pattern

test('should add catalog', () => {
  // Arrange
  const state = { catalogs: [] };
  const op = { op: 'add_catalog', ... };
  // Act
  const newState = applyOpsToState(state, [op]);
  // Assert
  expect(newState.catalogs).toHaveLength(1);
});

- Test Naming Convention
  - Use descriptive names: should create project file with v4 schema
  - Use action-oriented names: should apply add_catalog operation
- Test Isolation
  - Each test should be independent
  - Use beforeEach to reset state
  - Mock external dependencies
- Test Coverage
  - Aim for 70%+ overall coverage
  - 80%+ for critical files like storage-v4.ts
  - Test edge cases and error conditions
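The same Arrange-Act-Assert shape applies to the Python SDK tests. A minimal self-contained sketch, where apply_ops_to_state is a stand-in for the SDK's real state reducer:

```python
def apply_ops_to_state(state: dict, ops: list) -> dict:
    """Minimal illustrative reducer: handles only add_catalog."""
    new_state = {"catalogs": list(state.get("catalogs", []))}
    for op in ops:
        if op["op"] == "add_catalog":
            new_state["catalogs"].append(op["payload"])
    return new_state

def test_should_add_catalog():
    # Arrange
    state = {"catalogs": []}
    op = {"op": "add_catalog", "payload": {"name": "test_catalog"}}
    # Act
    new_state = apply_ops_to_state(state, [op])
    # Assert
    assert len(new_state["catalogs"]) == 1
```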
Test Fixtures
Reusable test data in tests/fixtures/:
- sample-project-v4.json: Valid v4 project
- sample-changelog.json: Changelog with operations
- sample-snapshot.json: Snapshot with state
- sample-state.ts: TypeScript state objects
Using Operation Builders (Python)
The Python SDK includes operation builder utilities:
from tests.utils.operation_builders import OperationBuilder
op = OperationBuilder.add_catalog('cat_001', 'test_catalog')
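A minimal sketch of what such a builder can look like. The field names mirror the op shape used elsewhere in this guide; the real OperationBuilder may differ:

```python
class OperationBuilder:
    # Illustrative builder producing ops shaped like this guide's examples.
    _counter = 0

    @classmethod
    def add_catalog(cls, catalog_id, name):
        cls._counter += 1
        return {
            "id": f"op_{cls._counter:03d}",
            "op": "add_catalog",
            "payload": {"catalogId": catalog_id, "name": name},
        }
```

Centralizing op construction this way keeps payload shapes consistent across tests and gives one place to update when the operation schema changes.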
Debugging Tests
Python Tests
# Run with print statements visible
pytest tests/unit/test_storage_v4.py -xvs
# Drop into debugger on failure
pytest tests/unit/test_storage_v4.py --pdb
# Run only failed tests from last run
pytest --lf
VS Code Extension Tests
# Run with verbose output
npm test -- --verbose
# Run specific test
npm test -- --testNamePattern="should create project file"
# Update snapshots
npm test -- --updateSnapshot
Continuous Testing
Watch Mode (Python)
cd packages/python-sdk
pytest-watch tests/
Watch Mode (TypeScript)
cd packages/vscode-extension
npm run test:watch
Troubleshooting
Common Issues
Issue: ModuleNotFoundError: No module named 'schemax'
Solution: Install SDK in editable mode: pip install -e ".[dev]"
Issue: Jest tests fail with "Cannot find module 'vscode'"
Solution: Ensure tests/__mocks__/vscode.ts exists and is properly configured
Issue: React component tests fail with "acquireVsCodeApi is not defined"
Solution: Check tests/setup.ts has the webview API mock
Issue: Coverage thresholds not met
Solution: Add tests for uncovered code or adjust thresholds in jest.config.js
Release checklist
Use before a release. Every area below should have passing tests or a CI/smoke step.
Python SDK: Storage V4, SQL generator (all op types), state reducer & differ, catalog mapping, apply command, view dependencies, dependency graph, executor & deployment tracker, integration workflows.
VS Code Extension: Storage V4, ColumnGrid/Sidebar/TableConstraints/hierarchy, fixtures & mocks.
Integration & Smoke: Smoke (build extension, validate package, install SDK, validate example); devops/run-checks.sh runs format, lint, pytest, Jest, smoke.
Docs: TESTING.md "Last Updated", test count, and status match current runs.
Last Updated: 2025-10-28
Test Count: 306 Python tests (12 skipped), 25 VS Code Extension tests
Status: ✅ All tests passing
Coverage: Python SDK 70%+, VS Code Extension ~4% (baseline)