# Testing in Incan
Incan provides a pytest-like testing experience with `incan test`.
For the testing API (assertions, markers, fixtures, parametrization), see: Language → Reference → Testing.
For a guided walkthrough, see: The Incan Book → Unit tests.
## Quick Start
> **No-install fallback.** If you did not run `make install`, you can still run the `incan` binary directly:
>
> - from the repository root: `./target/release/incan ...`
> - or via an absolute path (from anywhere): `/absolute/path/to/incan run path/to/file.incn`
> **If something fails.** If you run into errors, see Troubleshooting. If it still looks like a bug, please file an issue on GitHub.
Create a test file (it must be named `test_*.incn` or `*_test.incn`):
"""Test file for math operations"""
from testing import assert_eq
def add(a: int, b: int) -> int:
return a + b
def test_addition() -> None:
result = add(2, 3)
assert_eq(result, 5)
def test_subtraction() -> None:
result = 10 - 3
assert_eq(result, 7)
Run tests:
```
incan test tests/
```
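Since the experience is deliberately pytest-like, readers coming from Python may find the analogous pytest file a useful point of reference (Python shown for comparison only; note that Incan uses `assert_eq` from its `testing` module rather than bare `assert`):

```python
# pytest analogue of the Incan quick-start file above (comparison only)
def add(a: int, b: int) -> int:
    return a + b

def test_addition() -> None:
    assert add(2, 3) == 5

def test_subtraction() -> None:
    assert 10 - 3 == 7
```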
## Test Discovery
Tests are discovered automatically:

- **Files**: named `test_*.incn` or `*_test.incn`
- **Functions**: named `def test_*()`
```
my_project/
├── src/
│   └── main.incn
└── tests/
    ├── test_math.incn      # ✓ discovered
    ├── test_strings.incn   # ✓ discovered
    └── helpers.incn        # ✗ not a test file
```
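Files that don't match the naming pattern (like `helpers.incn` above) are skipped by discovery but can still hold shared code; a hypothetical sketch, assuming sibling modules can be imported by name the same way the `testing` module is:

```
# tests/helpers.incn - not discovered (name doesn't match the pattern)
def make_user() -> str:
    return "alice"

# tests/test_users.incn - discovered
from testing import assert_eq
from helpers import make_user

def test_make_user() -> None:
    assert_eq(make_user(), "alice")
```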
## Assertions
Use assertion functions from the `testing` module (not Python-style `assert expr`):
```
from testing import assert, assert_eq, assert_ne, assert_true, assert_false, fail

# Equality
assert_eq(actual, expected)
assert_ne(actual, other)

# Boolean
assert(condition)
assert_true(condition)
assert_false(condition)

# Explicit failure
fail("this test should not reach here")
```
## Markers
### `@skip` - Skip a test
@skip("not implemented yet")
def test_future_feature() -> None:
pass
Output: `test_future_feature SKIPPED (not implemented yet)`
### `@xfail` - Expected failure
@xfail("known bug #123")
def test_known_issue() -> None:
assert_eq(buggy_function(), "fixed")
- If the test fails: `XFAIL (expected)`
- If the test passes: `XPASS` (unexpected; reported as a failure)
### `@slow` - Mark slow tests
```
@slow
def test_integration() -> None:
    # Long-running test
    pass
```
Slow tests are excluded by default; include them with `--slow`.
## CLI Options
```
# Run all tests in a directory
incan test tests/

# Run a specific file
incan test tests/test_math.incn

# Filter by keyword
incan test -k "addition"

# Verbose output (show timing)
incan test -v

# Stop on first failure
incan test -x

# Include slow tests
incan test --slow
```
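As in pytest, these flags should compose; for example, to run only tests matching a keyword, with timing shown, stopping at the first failure (flag combination assumed, not confirmed by the docs above):

```
incan test tests/ -k "math" -v -x
```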
## Output Format
```
=================== test session starts ===================
collected 4 item(s)

test_math.incn::test_addition PASSED
test_math.incn::test_subtraction PASSED
test_math.incn::test_division FAILED
test_math.incn::test_future SKIPPED (not implemented)

=================== FAILURES ===================
___________ test_division ___________
assertion failed: `assert_eq(10 / 3, 3)`
  left: 3.333...
  right: 3

tests/test_math.incn::test_division

=================== 2 passed, 1 failed, 1 skipped in 0.05s ===================
```
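The left/right diagnostic above is the same shape pytest and Rust's assertion macros produce; a hypothetical Python sketch of the reporting logic behind an `assert_eq`-style helper (not Incan's actual implementation):

```python
def assert_eq(actual, expected):
    """Raise an AssertionError with a left/right diagnostic when values differ."""
    if actual != expected:
        raise AssertionError(
            f"assertion failed: `assert_eq({actual!r}, {expected!r})`\n"
            f"  left: {actual!r}\n"
            f"  right: {expected!r}"
        )

assert_eq(10 - 3, 7)  # equal values: returns None, no output
```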
## Exit Codes
| Code | Meaning |
|---|---|
| 0 | All tests passed |
| 1 | One or more tests failed |
## CI Integration
```
# GitHub Actions
- name: Run tests
  run: incan test tests/
```
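A fuller workflow might look like the sketch below; the checkout and install steps are illustrative assumptions, so adapt them to however Incan is built or installed in your CI (the non-zero exit code on failure is what fails the job):

```yaml
name: tests
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Build or install the incan binary here, e.g. `make install`
      - name: Run tests
        run: incan test tests/ -v
```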
## Fixtures
Fixtures provide setup/teardown and dependency injection for tests.
### Basic Fixture
```
from testing import fixture

@fixture
def database() -> Database:
    """Provides a test database."""
    db = Database.connect("test.db")
    yield db      # Test runs here
    db.close()    # Teardown (always runs, even on failure)

def test_insert(database: Database) -> None:
    database.insert("key", "value")
    assert_eq(database.get("key"), "value")
```
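The yield-based setup/teardown shape is the same one pytest fixtures use; a minimal Python sketch of the mechanics (the dict standing in for a database is illustrative):

```python
def database():
    db = {"connected": True}        # setup
    try:
        yield db                    # the test body runs while suspended here
    finally:
        db["connected"] = False     # teardown runs even if the test raised

# How a runner drives the fixture:
gen = database()
db = next(gen)                      # run setup, get the fixture value
assert db["connected"]              # ...test body would use db here...
try:
    next(gen)                       # resume: teardown runs, then StopIteration
except StopIteration:
    pass
assert not db["connected"]
```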
### Fixture Scopes
Control when fixtures are created/destroyed:
@fixture(scope="function") # Default: new per test
def temp_file() -> str:
...
@fixture(scope="module") # Shared across file
def shared_client() -> Client:
...
@fixture(scope="session") # Shared across entire run
def global_config() -> Config:
...
### Fixture Dependencies
Fixtures can depend on other fixtures:
```
@fixture
def config() -> Config:
    return Config.load("test.toml")

@fixture
def database(config: Config) -> Database:
    # config fixture is automatically injected
    return Database.connect(config.db_url)

def test_query(database: Database) -> None:
    result = database.query("SELECT 1")
    assert_eq(result, 1)
```
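Injection by parameter name can be sketched with a toy resolver in Python (illustrative only; a real resolver would also handle scopes, caching, and teardown):

```python
import inspect

fixtures = {}

def fixture(fn):
    """Register fn as a fixture under its own name."""
    fixtures[fn.__name__] = fn
    return fn

def resolve(name):
    """Build a fixture, recursively resolving parameters that name other fixtures."""
    fn = fixtures[name]
    args = {p: resolve(p) for p in inspect.signature(fn).parameters}
    return fn(**args)

@fixture
def config():
    return {"db_url": "test://db"}

@fixture
def database(config):
    return f"connected to {config['db_url']}"

assert resolve("database") == "connected to test://db"
```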
### Autouse Fixtures
Auto-apply fixtures to all tests in scope:
```
@fixture(autouse=true)
def setup_logging() -> None:
    """Automatically applied to all tests in this file."""
    logging.set_level("DEBUG")
    yield
    logging.set_level("INFO")
```
## Parametrize
Run a test with multiple parameter sets:
```
from testing import parametrize

@parametrize("a, b, expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
    (100, 200, 300),
])
def test_add(a: int, b: int, expected: int) -> None:
    assert_eq(add(a, b), expected)
```
Output:
```
test_math.incn::test_add[1-2-3] PASSED
test_math.incn::test_add[0-0-0] PASSED
test_math.incn::test_add[-1-1-0] PASSED
test_math.incn::test_add[100-200-300] PASSED
```
### Named Test IDs
@parametrize("input, expected", [
("hello", "HELLO"),
("World", "WORLD"),
("", ""),
], ids=["lowercase", "mixed", "empty"])
def test_upper(input: str, expected: str) -> None:
assert_eq(input.upper(), expected)
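Following the report format shown earlier, the `ids` should replace the bracketed parameter tuples in the test IDs, presumably along these lines:

```
test_strings.incn::test_upper[lowercase] PASSED
test_strings.incn::test_upper[mixed] PASSED
test_strings.incn::test_upper[empty] PASSED
```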
### Combining Fixtures and Parametrize
```
@fixture
def database() -> Database:
    db = Database.connect("test.db")
    yield db
    db.close()

@parametrize("key, value", [
    ("name", "Alice"),
    ("age", "30"),
])
def test_insert(database: Database, key: str, value: str) -> None:
    database.insert(key, value)
    assert_eq(database.get(key), value)
```
## Async Tests (Coming Soon)
Support for async test functions and fixtures (on Tokio) is planned:
```
from testing import fixture

@fixture
async def http_server() -> ServerHandle:
    server = await start_server(port=0)
    yield server
    await server.shutdown()

async def test_endpoint(http_server: ServerHandle) -> None:
    response = await fetch(f"http://localhost:{http_server.port}/health")
    assert_eq(response.status, 200)
```
## Best Practices
- **One assertion per test** - makes failures easier to diagnose
- **Descriptive test names** - e.g. `test_user_creation_with_invalid_email_fails`
- **Keep tests fast** - mark slow tests with `@slow`
- **Use `@xfail` for known bugs** - track them without blocking CI