Hey maintainers,

I came across `testsuite/test.t.vect.observe.strds.layer_bug.sh` and noticed it's been disabled — line 7 is just `exit`. The test created 270 maps to check layer limits but never actually asserted anything; it just ran commands and moved on.

I rewrote it as pytest with 50 maps instead of 270 (enough to hit the 32/64 layer thresholds) and added actual assertions. Here's the gist.

Fixtures create isolated projects so nothing leaks between tests:
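Roughly, a fixture like this (a minimal sketch, assuming the GRASS 8.4-style `gs.create_project` / `gs.setup.init` session API; the namespace fields are just what the tests below reference):

```python
import os
from types import SimpleNamespace

import pytest

import grass.script as gs


@pytest.fixture(scope="module")
def simple_observation_session(tmp_path_factory):
    """Isolated project with one sample point and a small constant-value STRDS."""
    project = tmp_path_factory.mktemp("observe") / "project"
    gs.create_project(project)  # assumes GRASS >= 8.4
    with gs.setup.init(project, env=os.environ.copy()) as session:
        env = session.env
        n_maps = 4
        gs.run_command("g.region", n=10, s=0, e=10, w=0, res=1, env=env)
        # One point to observe the raster time series at.
        gs.write_command("v.in.ascii", input="-", output="points",
                         stdin="5|5", env=env)
        names = []
        for i in range(1, n_maps + 1):
            name = f"obs_{i}"
            gs.mapcalc(f"{name} = 100.0", env=env)  # constant value checked below
            names.append(name)
        gs.run_command("t.create", type="strds", temporaltype="absolute",
                       output="obs_strds", title="obs",
                       description="observation test data", env=env)
        gs.run_command("t.register", type="raster", input="obs_strds",
                       maps=",".join(names), start="2000-01-01",
                       increment="1 day", flags="i", env=env)
        yield SimpleNamespace(vector_name="points", strds_name="obs_strds",
                              env=env, n_maps=n_maps)
```

`large_observation_session` follows the same pattern with `n_maps = 50`, enough to cross the 32/64 layer thresholds.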
Basic tests verify the module actually produces correct output:
```python
def test_single_strds_observation(self, simple_observation_session):
    data = simple_observation_session
    gs.run_command("t.vect.observe.strds", input=data.vector_name,
                   strds=data.strds_name, output="basic_stvds",
                   vector_output="basic_vec", columns="obs",
                   env=data.env, overwrite=True)
    info = gs.parse_command("t.info", type="stvds",
                            input="basic_stvds", flags="g", env=data.env)
    assert int(info["number_of_maps"]) == data.n_maps

def test_observation_values_present(self, simple_observation_session):
    data = simple_observation_session
    # ... run t.vect.observe.strds ...
    rows = gs.read_command("v.db.select", map="vals_vec", layer=1,
                           columns="sampled", flags="c", env=data.env)
    values = [float(v) for v in rows.strip().splitlines() if v.strip()]
    assert all(v == pytest.approx(100.0) for v in values)
```
Error handling — stuff the shell tests never checked:
```python
def test_missing_input_vector_fails(self, simple_observation_session):
    data = simple_observation_session
    ret = gs.run_command("t.vect.observe.strds", input="nonexistent_vector",
                         strds=data.strds_name, output="err_stvds",
                         vector_output="err_vec", columns="x",
                         env=data.env, errors="status")
    assert ret != 0

def test_column_count_mismatch_fails(self, simple_observation_session):
    data = simple_observation_session
    # Two columns but one STRDS — should fail
    ret = gs.run_command("t.vect.observe.strds", input=data.vector_name,
                         strds=data.strds_name, output="err_stvds",
                         vector_output="err_vec", columns="a,b",
                         env=data.env, errors="status")
    assert ret != 0
```
Layer-limit tests (the re-enabled part, marked `@pytest.mark.slow`):
```python
@pytest.mark.slow
class TestLayerLimits:
    def test_many_layers_observation_succeeds(self, large_observation_session):
        data = large_observation_session
        gs.run_command("t.vect.observe.strds", input=data.vector_name,
                       strds=data.strds_name, output="large_stvds",
                       vector_output="large_vec", columns="lval",
                       env=data.env, overwrite=True)
        info = gs.parse_command("t.info", type="stvds",
                                input="large_stvds", flags="g", env=data.env)
        assert int(info["number_of_maps"]) == data.n_maps  # 50

    def test_many_layers_data_values(self, large_observation_session):
        data = large_observation_session
        for layer_num in (1, 10, 25, data.n_maps):
            rows = gs.read_command("v.db.select", map="large_vec",
                                   layer=layer_num, columns="lval",
                                   flags="c", env=data.env)
            values = [v.strip() for v in rows.strip().splitlines() if v.strip()]
            numeric = [v for v in values if v not in {"", "*"}]
            assert len(numeric) > 0, f"Layer {layer_num}: all values are NULL"
```
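The `slow` marker needs registering so `pytest -m "not slow"` can skip these during quick runs without unknown-marker warnings; a minimal sketch using pytest's standard `pytest_configure` hook:

```python
# conftest.py (sketch): register the custom "slow" marker so pytest
# does not warn about it; `pytest -m "not slow"` then skips the
# layer-limit tests during quick runs.
def pytest_configure(config):
    config.addinivalue_line("markers", "slow: long-running layer-limit tests")
```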
All 9 tests pass (~3.5 min total), and linting is clean.
Questions

Before I open a PR, I wanted to check with you all first:

- Is this change wanted?
- Does it make sense to add `tests/` (pytest) alongside the existing `testsuite/` (shell) for this module?
- Is 50 maps a reasonable number, or would you prefer something different?
- I noticed other disabled tests in the temporal suite; is it worth auditing those systematically?