The Science of Bad Science


How Flawed Cannabis Research Misleads the Industry and Hurts the People It’s Meant to Help

by Justin Michaelov on August 13, 2025

The Science of Bad Science is about stripping away academic gloss and examining the structural integrity of research. A study’s conclusions are only as reliable as the data and methods behind them. Titles, credentials, and journal logos can’t transform flawed design into valid science. When measurements are unverified, statistical methods are inadequate, or procedures don’t align with the research question, the conclusions are merely marketing. This is about validating the work instead of the résumé, and showing how to dissect a study so the claims stand or fall on their evidence, not the letters after someone’s name.

The case study for this review, Impact of Water Activity on the Chemical Composition and Smoking Quality of Cannabis Flower: The Science of Smokability Phase I Results (CST0325), presents itself as a polished, authoritative piece of work authored by credentialed scientists. But once we dig in, the paper contains clear methodological gaps, analytical weaknesses, and interpretive overreach, all of which are documented in the 13 validated flaws identified in this review. These include contradictions between narrative and data, unverified measurements, insufficient replication, uncontrolled testing environments, and missing validation steps. This is how bad science hides in plain sight, and why blind trust in credentials is a liability.

Why Methodology Matters More Than Credentials

A sound study is transparent, repeatable, and proportional in its claims. Every measurement is traceable to a validated method, instruments are calibrated, controls are applied to all potential confounding variables, and raw data is reported in a way that allows the math to be reconstructed independently. The conclusions have to align with the strength of the evidence, and limitations have to be explicitly acknowledged. In stark contrast, CST0325 leaves a lot of uncertainty in its methodology:

• The paper omits essential details like sample origin standardization, environmental control parameters, and full replication counts.

• Analytical methods are missing instrument details, calibration procedures, and recovery validation.

• Statistical outputs are presented without measures of dispersion or multiple-comparison corrections.

• Certain conclusions, like the claimed “optimal” 0.65 aW, aren’t supported by the study’s own data tables.
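To make the dispersion and multiple-comparison point concrete, here is a minimal sketch of what transparent reporting looks like. All numbers are hypothetical, not values from CST0325: a Bonferroni-adjusted significance threshold for a family of comparisons, and a mean reported with its standard deviation and replicate count.

```python
# Hedged sketch: dispersion + multiple-comparison correction.
# All values below are hypothetical, not data from CST0325.
import statistics

def bonferroni(p_values, alpha=0.05):
    """Return (adjusted alpha, significance flags) for a family of comparisons."""
    adjusted_alpha = alpha / len(p_values)  # Bonferroni: divide alpha by test count
    return adjusted_alpha, [p < adjusted_alpha for p in p_values]

# Three hypothetical pairwise comparisons between aW groups
p_values = [0.04, 0.012, 0.30]
adj_alpha, significant = bonferroni(p_values)

# A mean reported with its standard deviation and replicate count
replicates = [18.2, 19.1, 17.8]  # hypothetical THC % values
mean, sd = statistics.mean(replicates), statistics.stdev(replicates)
print(f"adjusted alpha = {adj_alpha:.4f}, significant = {significant}")
print(f"THC = {mean:.1f} ± {sd:.1f} % (n={len(replicates)})")
```

Note that a raw p = 0.04 survives a single test at alpha = 0.05 but fails once the threshold is corrected for three comparisons, which is exactly why uncorrected outputs overstate findings.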

Reader checklist you can apply anywhere:

Can I reconstruct the method step-by-step?

Are instruments named, calibrated, and validated for recovery?

Are environmental conditions fixed and recorded?

Do replicates and variance appear with the statistics?

Do units and totals close under mass balance?

Does the conclusion scale with the evidence, and are limitations stated?
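The mass-balance item on that checklist is arithmetic any reader can run against a paper's tables. A minimal sketch, using made-up weights rather than figures from the study:

```python
# Hedged sketch: a simple mass-balance check for a table of reported weights.
# The weights below are hypothetical, not values from CST0325's Table I.

def mass_balance_closes(initial_g, final_g, water_lost_g, tolerance_g=0.05):
    """True if the initial mass equals final mass plus reported water loss."""
    return abs(initial_g - (final_g + water_lost_g)) <= tolerance_g

# Example: 10.00 g of flower dried to 8.80 g
print(mass_balance_closes(10.00, 8.80, 1.20))  # loss fully accounted for
print(mass_balance_closes(10.00, 8.80, 0.90))  # 0.30 g unaccounted for
```

If a table's units are ambiguous or its totals fail a check this simple, the downstream statistics built on those numbers inherit the problem.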

Early Red Flags in CST0325

Many of the weaknesses in CST0325 are apparent before reaching the results:

• Missing sensory data for the 0.85 aW group, limiting interpretation to a partial dataset.

• Unit confusion in Table I that obscures mass-balance analysis.

• Use of tobacco smoking-machine parameters despite acknowledged differences in cannabis combustion.

• Water activity control via wet paper towels in jars, without direct measurement or microbial safety data.

• Financial extrapolation based on literature moisture curves, without cultivar-specific sorption data.
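That last point can be made concrete. Translating a water activity setpoint into a moisture content requires a fitted sorption isotherm; the GAB model below is the standard functional form, but the parameter values here are placeholders, not measured cannabis constants. Two hypothetical cultivars with different fitted parameters land at different moisture contents for the same 0.65 aW, which is why generic literature curves can't anchor revenue projections.

```python
# Hedged sketch: GAB sorption isotherm. The model form is standard; the
# parameter values (M0, C, K) are illustrative placeholders, not measured
# cannabis constants.

def gab_moisture(aw, M0, C, K):
    """Equilibrium moisture content (g water / g dry matter) at water activity aw."""
    kaw = K * aw
    return M0 * C * kaw / ((1 - kaw) * (1 - kaw + C * kaw))

# Two hypothetical cultivars with different fitted parameters:
mc_a = gab_moisture(0.65, M0=0.055, C=10.0, K=0.80)
mc_b = gab_moisture(0.65, M0=0.070, C=6.0, K=0.75)
print(f"cultivar A: {mc_a:.3f} g/g, cultivar B: {mc_b:.3f} g/g")
```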

Each of these issues corresponds to one or more of the 13 validated flaws in this review and directly undermines the paper’s reproducibility and applicability.

The Operational Risk of Adopting These Findings

If implemented as post-harvest best practice, CST0325’s conclusions could introduce measurable risks to commercial operations:

• Product quality degradation due to reliance on unverified environmental control methods.

• Regulatory exposure from misinterpretation of aW as direct %MC without microbial safety consideration.

• Economic inefficiency from applying flawed weight-loss and revenue calculations.

• Consumer experience misrepresentation from sensory data influenced by uncontrolled, biased conditions.

These are not theoretical concerns - they follow directly from the methodological gaps and analytical weaknesses documented in this review.

Framing the Review as a Model for Critical Evaluation

This post-publication peer review applies a structured critique to CST0325, matching the conventional academic review format. By systematically testing claims against evidence, identifying missing data, and flagging bias, this review serves both as an assessment of one study and as a framework any cultivator, processor, or industry stakeholder can apply to future literature. The objective is not to discredit authors personally, but to hold research to the standards required for it to meaningfully inform industry practice.

A Quick Field Guide: How to Spot Bad Science (read this before the case study)

Methods first, not conclusions. If you can’t reconstruct what was done, you can’t trust what’s claimed.

Controls and calibration. No instrument calibration, no recovery validation, no microbial controls = no confidence.

Replicates and dispersion. Counts, variance, and multiple-comparison corrections must be visible.

Environment. If test conditions aren’t locked (temperature, RH, air movement, devices), the results aren’t either.

Mass balance & units. If the math can’t close, the story can’t either.

Claim strength matches evidence. Big claims with thin data = marketing, not science.


Taken as a whole, this is far from a definitive scientific paper. It’s a commercially styled feature that borrows the trappings of research without the rigor. The layout and narrative-first presentation replace structured methods, and essential statistics and procedures are missing. There’s no sample-origin standardization, no documented replication plan, and no instrument make/model, calibration, or spike-and-recovery validation; environmental conditions like temperature, RH, and airflow aren’t locked; units conflict and the mass balance can’t close. The sensory work is unblinded, conducted without a scoring rubric, and unanalyzed (there’s no variance, power, or multiple-comparison control). Water activity is “controlled” with jar hacks instead of verified setpoints, with zero microbial safety validation. A tobacco smoking machine is used despite cannabis and tobacco having completely different combustion dynamics. Key data is absent (e.g., 0.85 aW sensory), while the marquee claim of 0.65 aW as “optimal” isn’t supported by the tables. Financial projections lean on literature curves without cultivar- or even species-specific sorption data, and multiple authors have direct commercial ties but make no formal conflict-of-interest declaration. The whole piece reads like industry marketing, not a reproducible experiment.

What This Means for Science

The authors of this paper aren't villains, but their work isn’t above reproach either. Their credentials may earn them a seat at the table, but credentials don’t guarantee rigor, accuracy, or relevance. In this case, the methods were loose, the controls insufficient, and the conclusions clearly overstated, yet the paper still slipped into print. That’s a failure on two fronts: the authors’, for not holding their own work to a higher standard, and the publisher’s, for doing exactly what we’ve discussed here, blindly trusting the authors’ credentials. In a field as young and volatile as cannabis science, this complacency is more than a small mistake. This kind of science is a liability that seeds misinformation into professional discourse, where it’s amplified and solidified as “best practice” before anyone double-checks the math. If these authors want their findings to shape an industry, they have an obligation to produce work that can stand up to the kind of scrutiny we’ve just applied to it. By that metric, this paper simply doesn’t make the cut.

Your job as a reader, whether you’re a cultivator, a DOC, or a hobbyist trying to produce the best work possible, is to separate the strength of the data from the shine of the credentials that produced it. This means questioning methods, looking for missing controls, spotting overreaches in conclusions, and asking if the work would stand if we stripped it of its PhDs and commercial affiliations.

Us meager, uneducated peasants still have the power to hold them accountable through diligence.

Be a good citizen scientist: always read beyond the headline, and check that the work you're adopting as fact is supported by the data behind its claims.

Make sure you follow me on Instagram @sharkmousethesecond, on X @sharkmousefarm, and on Facebook @sharkmousefarms for more critical breakdowns in the series The Science of Bad Science.

Coming next week

Our critical breakdown of the study “”

by Justin Michaelov

Sharkmouse Farms

Canada

2025
