Title: Bayesian Non-Parametric Test Using Independent Evaluation on Classifiers

Author: Timo von Oertzen

Abstract: With Structural Equation Models, parameterized tests in methodology have been unified to a large extent, and the framework is a pleasure to work with: mature software exists for SEMs, and it allows Bayesian inference just as easily as frequentist inference. With non-parametric tests, however, we still teach NHST in the classroom despite its well-known shortcomings. In this talk, we'd like to discuss the possibility of putting at least all non-parametric group-comparison tests - e.g., U-tests or chi^2-tests - into an AI framework based on classifiers, and show how to estimate group differences in a Bayesian fashion even in the absence of parameters (by cheating, of course: the classifier's performance serves as the parameter). These tests will be shown to be at worst as powerful as the classical tests and at best more powerful, detecting more group differences, and they have the potential to unify all tests that cannot be phrased in SEM language, while kicking NHST out of its last big stronghold in empirical analysis.
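
Below is a minimal sketch of the idea described in the abstract, not the author's implementation: train a classifier to separate the two groups, evaluate it on an independent held-out split, and treat its held-out accuracy as the "parameter" for Bayesian inference. A Beta-Binomial model on the number of correct held-out classifications gives a posterior over the true accuracy; posterior mass above chance (0.5 for balanced groups) then counts as evidence for a group difference. The classifier choice, the flat Beta(1, 1) prior, and the synthetic data are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic example: two groups with a small mean shift (assumption for illustration)
group_a = rng.normal(loc=0.0, scale=1.0, size=(100, 3))
group_b = rng.normal(loc=0.4, scale=1.0, size=(100, 3))
X = np.vstack([group_a, group_b])
y = np.r_[np.zeros(len(group_a)), np.ones(len(group_b))]

# Independent evaluation: fit on a training split, score on a held-out split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
correct = int((clf.predict(X_test) == y_test).sum())
n_test = len(y_test)

# Beta-Binomial posterior over the true classification accuracy theta,
# using a flat Beta(1, 1) prior (an assumption, not the author's choice)
posterior = beta(1 + correct, 1 + n_test - correct)

# Posterior probability that the classifier beats chance, i.e. that the groups differ
p_above_chance = 1.0 - posterior.cdf(0.5)
print(f"held-out accuracy: {correct / n_test:.2f}")
print(f"P(theta > 0.5 | data) = {p_above_chance:.3f}")
print(f"95% credible interval for theta: {posterior.interval(0.95)}")
```

In this sketch the posterior over classifier accuracy plays the role that a parameter posterior plays in a parametric SEM analysis: instead of a p-value from a U-test or chi^2-test, one reports a credible interval and the posterior probability of an above-chance group difference.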