Blazej Mrozinski

What I Do

I turn behavioral science into working systems. Twenty years in academic research gave me psychometrics, experimental methodology, and a deep skepticism of anything that hasn’t been measured properly. The startup side taught me how to ship — and that rigor without delivery is just a hobby.

The work spans psychometric assessment design for Gyfted, EPAM, gr8.tech, and HSE Ireland. Programmatic SEO and growth systems at Digital Savages and Prawomat. Workforce development architecture at Nerds.family. Statistical consulting from UCL to esportsLABgg. Four areas, usually simultaneous, drawing on the same core: measurement, systems thinking, and the discipline to validate before you scale.

Psychometric Assessment & Matching Systems

Most assessments in the wild don’t measure what they claim to. I design instruments that do — career interest quizzes, competency batteries, personality assessments, values inventories, cognitive ability tests — using Item Response Theory, Classical Test Theory, and Computerized Adaptive Testing. Then I build the scoring and matching systems that make the results actionable. The work covers both ends: selection (hiring, matching, career guidance) and development (L&D, workforce planning, skill progression).
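As a rough illustration of the IRT machinery mentioned above, here is a minimal sketch of the two-parameter logistic (2PL) item response function, the building block behind adaptive testing and latent-trait scoring. The item parameters here are invented for the example, not taken from any of the instruments described on this page.

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability that a person with
    ability theta endorses/answers an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: moderate discrimination (a=1.2), average difficulty (b=0.0)
print(round(p_correct(0.0, 1.2, 0.0), 2))   # 0.5 — ability equals difficulty
print(round(p_correct(1.0, 1.2, 0.0), 2))   # higher ability, higher probability
```

In a CAT setting, the same function drives item selection: the next item administered is the one whose parameters are most informative at the test-taker's current ability estimate.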

At Gyfted, this means a patent-pending AI matching platform used by organizations like LSEG and EPAM. For EPAM specifically, I designed custom assessment instruments aligned to their internal competency model and L&D framework. This wasn’t an off-the-shelf deployment — it was a tailored psychometric layer connecting validated measurement to practical talent development decisions.

For gr8.tech, I built a custom values assessment tied to the company’s operating philosophy and hiring logic. The design used forced-choice, rank-type, and situational judgment formats with structured feedback, translating organizationally defined values into a psychometric architecture that could actually discriminate between candidates.

For HSE Ireland, I designed a career interest instrument from first principles for a specific population: Irish school leavers exploring healthcare careers. That meant building the conceptual model (9 career clusters mapped to healthcare role families), the item pool, the scoring architecture, and the classification logic that routes results into actionable career paths. The psychometric validation — IRT, CFA, reliability estimation — ensures the instrument measures what it claims to. The end user takes a 10-minute quiz. Behind it is months of design work they never see.
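The classification logic described above can be sketched in miniature. This is a toy illustration only: the cluster names, scores, and the separation threshold are all invented, not the actual HSE instrument's design.

```python
def route(scores: dict) -> str:
    """Route a profile of standardized cluster scores to a career path.
    Returns the top cluster, or flags a mixed profile when the top two
    clusters are not clearly separated."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    top, second = ranked[0], ranked[1]
    if top[1] - second[1] < 0.5:  # hypothetical separation threshold
        return "mixed profile: " + ", ".join(name for name, _ in ranked[:2])
    return top[0]

# Invented example profile across three made-up clusters
print(route({"patient care": 1.5, "diagnostics": 0.2, "health data": -0.3}))
```

The real design problem is everything upstream of this function: validating that the cluster scores are reliable and distinct enough for a routing decision like this to be defensible.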

At Nerds.family, the Academy 360 model is a measurement-driven talent pipeline where learning paths, mentor evaluation, and progression logic are designed to make skill acquisition visible and legible to employers. It’s workforce development built on the same psychometric principles as assessment: if you can’t measure whether someone actually learned something, the program is guesswork.

More recently, my matching work has moved beyond classical psychometrics into AI-assisted systems for collaboration and opportunity discovery, where values, goals, communication style, and intent are progressively captured across different layers of interaction. The instruments are different, but the design problem is the same: measure something real, make it actionable.

Growth Engineering & SEO

I build systems that scale organic traffic through code, not through manual effort. My version of SEO sits where information architecture, programmatic content generation, UX, and conversion logic meet. It’s product and system design, not content operations.

Prawomat is the clearest example: 1,200+ templated pages generated from a single pattern, each targeting a specific long-tail keyword. Product-integrated SEO where the content architecture, the document generator, and the growth engine are the same system. Hub-and-spoke structure with 12 category hubs, a blog layer for topical depth, and daily legal news for freshness signals. All built to scale without linear content effort.
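The single-pattern approach above reduces, at its core, to one template expanded over a keyword set. A minimal sketch, with invented keywords and a made-up URL scheme:

```python
from string import Template

# One pattern, many long-tail pages. Keywords and copy are placeholders.
PATTERN = Template(
    "<title>$keyword | document template</title>\n"
    "<h1>$keyword</h1>\n"
    "<p>How to prepare: $keyword.</p>"
)

KEYWORDS = [
    "appeal against a parking fine",
    "employment contract termination notice",
]

def slug(keyword: str) -> str:
    """Turn a keyword into a URL path segment."""
    return keyword.lower().replace(" ", "-")

# Map each URL to its generated page
pages = {f"/templates/{slug(k)}/": PATTERN.substitute(keyword=k) for k in KEYWORDS}
```

Adding a page then means adding a row of data, not writing content, which is what lets the architecture scale to 1,200+ pages without linear effort.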

At Gyfted, the same approach drove 1.2M organic users. At Digital Savages / SEO Savages, it’s what the fractional growth team delivers across talent assessment, legal tech, AI health, and AdTech startups.

AI/LLM optimization — visibility in ChatGPT, Claude, Gemini, and Google AI Overviews — means restructuring content into machine-legible entities, topic clusters, high-signal landing pages, and answerable domain knowledge. It’s not a channel add-on. It’s a structural redesign of how content gets found.
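One common tactic for machine-legibility is schema.org structured data. A hedged example, assuming FAQPage markup as the format and with invented question text; this is an illustration of the general technique, not a specific client implementation:

```python
import json

# schema.org FAQPage structured data: content as machine-readable entities
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is a competency battery?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "A set of validated assessments measuring job-relevant skills.",
        },
    }],
}

# Serialized as JSON-LD for embedding in a page's <script> block
print(json.dumps(faq, indent=2))
```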

Product Consulting

I take fuzzy domains — psychometrics, legal logic, career guidance, educational workflows, health interpretation, matching — and turn them into structured products with scoring rules, decision flows, specifications, and validation criteria. The common thread is translating ambiguous expertise into implementable systems.

I own the problem framing, system logic, product specification, validation logic, and interpretation layer. Engineering teams handle implementation. That division works because I bring the domain knowledge and measurement rigor that determines whether the product actually does what it claims, while engineers bring the code that makes it run.

I’ve taken products from zero to production across psychometric assessment platforms (Gyfted), workforce development systems (Nerds.family), and legal tech platforms (Prawomat). Prawomat is also a product consulting proof point: taking a complex procedural domain (Polish administrative and civil law) and turning it into scalable, user-facing logic that generates valid legal documents from structured input.

Statistical Modeling & Research

Structural equation modeling, multilevel modeling, latent class analysis, and other methods for research that's too complex for standard tools. This isn't separate from the product and assessment work; it's what keeps that work grounded. I can evaluate factor structure, detect weak measurement, validate constructs, and decide when the data doesn't support the story. That matters every time I design an instrument or review whether a product's metrics mean what someone claims they mean.
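"Detecting weak measurement" often starts with something as simple as internal consistency. A minimal sketch of Cronbach's alpha in plain Python, using made-up item scores:

```python
def cronbach_alpha(items: list) -> float:
    """Internal consistency reliability.
    items: list of per-item score columns, one list per item,
    each of equal length (one entry per respondent)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    sum_item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Three hypothetical items, four respondents
print(round(cronbach_alpha([[1, 2, 3, 4], [2, 3, 4, 5], [1, 3, 3, 5]]), 2))  # ≈ 0.98
```

Alpha is only the crudest check; factor structure (CFA) and IRT-based information curves do the heavier lifting when deciding whether an instrument measures what it claims to.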

The applied side includes psychometric validation, longitudinal modeling, online behavioral measurement, and methodological work on analyzing messy real-world data — the kind that doesn’t arrive clean or balanced.

I’ve been teaching statistics and research methods at SWPS University since 2006 — the courses where students learn that their intuitions need evidence behind them. I’ve consulted on research for organizations ranging from University College London to esportsLABgg, and I’ve contributed to studies funded by NCN (Maestro, OPUS) and NCBR grants. My own research spans cognitive accessibility, self-concept, and intergroup behavior, published in journals including Journal of Personality and Social Psychology and Self and Identity.

What Ties the Work Together

Regardless of the domain, the work sits at the intersection of behavioral measurement, product logic, and scalable systems. I take something that people understand intuitively but can’t specify precisely — a competency, a career interest, a legal procedure, a growth opportunity — and build the structure that makes it measurable, implementable, and repeatable. The skill set is the same whether the output is an IRT-based assessment, a programmatic SEO architecture, or a product spec for a workforce development platform.

Problems I’m Most Useful For

The pattern across these projects: an organization has expertise but no system. Data but no decision logic. An assessment idea but no validated instrument. A growth opportunity that requires architecture and code, not more content. A domain complex enough that the product spec is the hard part, not the engineering.

If any of that maps to a problem you’re working on, I’m happy to talk. I’ll tell you honestly whether I can help.
