Price: 699:-
Other formats:
- Pocket/Paperback: 839:-
This book introduces empirical methods for machine learning, with a special focus on applications in natural language processing (NLP) and data science. The authors present problems of validity, reliability, and significance and offer common solutions grounded in statistical methodology. The book focuses on model-based empirical methods in which data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs).

Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests, such as a validity test that detects circular features which circumvent learning. It also discusses a reliability coefficient that uses variance decomposition based on the random effect parameters of LMEMs. Lastly, a significance test based on the likelihood ratios of nested LMEMs trained on the performance scores of two machine learning models is shown to naturally accommodate variations in meta-parameter settings within hypothesis testing, and further enables a refined system comparison conditional on properties of the input data.

The book is self-contained, with an appendix on the mathematical background of generalized additive models and linear mixed effects models, and an accompanying webpage with the related R and Python code to replicate the presented experiments. The second edition also features a new hands-on chapter that illustrates how to use the included tools in practical applications.
- Format: Hardcover
- ISBN: 9783031570643
- Language: English
- Number of pages: 168
- Publication date: 2024-06-10
- Publisher: Springer International Publishing AG
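To give a flavor of the kind of significance test the blurb describes, here is a minimal sketch of a likelihood-ratio test between nested linear mixed effects models fitted to the performance scores of two systems. This is not the book's accompanying code: the synthetic data, the statsmodels-based models, and all variable names (seed, item, system, score) are invented for illustration, with the random seed treated as the grouping factor for the random effect.

```python
# Sketch: likelihood-ratio test with nested LMEMs on per-item scores
# of two systems A and B, evaluated under several random seeds.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical evaluation data: per-item scores for systems A and B,
# repeated over random seeds (the random-effect grouping factor).
rows = []
for seed in range(5):
    seed_offset = rng.normal(0.0, 0.02)          # seed-level variation
    for item in range(50):
        base = rng.normal(0.70, 0.05)            # item difficulty
        rows.append({"seed": seed, "system": "A",
                     "score": base + seed_offset + rng.normal(0.0, 0.02)})
        rows.append({"seed": seed, "system": "B",
                     "score": base + seed_offset + 0.03 + rng.normal(0.0, 0.02)})
df = pd.DataFrame(rows)

# Nested LMEMs fitted by maximum likelihood (reml=False) so that
# their log-likelihoods are directly comparable.
full = smf.mixedlm("score ~ system", df, groups=df["seed"]).fit(reml=False)
reduced = smf.mixedlm("score ~ 1", df, groups=df["seed"]).fit(reml=False)

# Likelihood-ratio statistic; 1 degree of freedom for the dropped
# fixed effect of "system".
lr = 2 * (full.llf - reduced.llf)
p_value = stats.chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p_value:.4f}")
```

A small p-value here would indicate that the fixed effect for the system identity improves the fit beyond what seed-level random variation explains, which is the basic idea behind comparing systems with nested LMEMs rather than a single pooled score.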