My Open Source Contributions
XLSR – Cross-Lingual Sentence Representations
Models and training code for cross-lingual sentence representations, such as T-Systems-onsite/cross-en-de-roberta-sentence-transformer.
Census-Income with LightGBM and Optuna
This project uses the census income dataset and fits LightGBM models on it. It is not intended to produce state-of-the-art results, but rather serves as a demo of how LightGBM, Optuna and HPOflow interact. The use of HPOflow is optional and can be removed if desired. We also calculate feature importances with SHAP (SHapley Additive exPlanations).
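A minimal sketch of the core loop, assuming the dataset is fetched from OpenML and leaving the optional HPOflow part out for brevity:

```python
# Minimal sketch: tune a LightGBM classifier on census income data with
# Optuna, then inspect feature importances with SHAP. Loading via OpenML
# ("adult") is an assumption; the project may load the data differently.
import lightgbm as lgb
import optuna
import shap
from sklearn.datasets import fetch_openml
from sklearn.model_selection import cross_val_score, train_test_split

X, y = fetch_openml("adult", version=2, return_X_y=True, as_frame=True)
X = X.select_dtypes("number")  # keep the sketch simple: numeric features only
y = (y == ">50K").astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

def objective(trial: optuna.Trial) -> float:
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 500),
        "num_leaves": trial.suggest_int("num_leaves", 15, 127),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
    }
    model = lgb.LGBMClassifier(**params, random_state=42)
    return cross_val_score(model, X_train, y_train, cv=3, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)

# Refit with the best parameters and explain the model with SHAP.
best = lgb.LGBMClassifier(**study.best_params, random_state=42).fit(X_train, y_train)
explainer = shap.TreeExplainer(best)
shap_values = explainer.shap_values(X_test)  # e.g. for shap.summary_plot
```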
The MLflow Docker Image
MLflow does not provide an official Docker image. This project fills that gap.
lazy-imports
Python tool to support lazy imports.
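For illustration, a generic sketch of the lazy-import technique using PEP 562's module-level `__getattr__`; this shows the general idea only and is not necessarily this tool's actual API:

```python
# Generic lazy-import sketch (PEP 562): the real import happens only on
# first attribute access. Place this in a package's __init__.py.
import importlib

_LAZY_MODULES = {"np": "numpy", "pd": "pandas"}  # exposed name -> real module

def __getattr__(name):
    # Called only when `name` is not found normally; import on first access.
    if name in _LAZY_MODULES:
        module = importlib.import_module(_LAZY_MODULES[name])
        globals()[name] = module  # cache so later lookups skip __getattr__
        return module
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")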
T-Systems-onsite/cross-en-de-roberta-sentence-transformer
This model computes sentence (text) embeddings for English and German text. These embeddings can then be compared with cosine similarity to find sentences with a similar semantic meaning.
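A minimal usage sketch with the sentence-transformers library (the sentence texts are illustrative):

```python
# Minimal sketch: compare English and German sentences with this model
# via cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("T-Systems-onsite/cross-en-de-roberta-sentence-transformer")
sentences = [
    "This is an example sentence.",
    "Das ist ein Beispielsatz.",  # German translation of the first sentence
    "The weather is nice today.",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities; translations should score close to 1.
cosine_scores = util.cos_sim(embeddings, embeddings)
print(cosine_scores)
```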
Hugging Face Transformers
- add classifier_dropout to classification heads: #12794 (see the sketch after this list)
- add option for subword regularization in sentencepiece tokenizer: #11149, #11417
- add strip_accents to basic BertTokenizer: #6280
- refactor slow sentencepiece tokenizers and add tests: #11716, #11737
- more fixes and improvements
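For illustration, a brief sketch of how three of these options are used; the model names are placeholders, while the option names match the merged PRs:

```python
# Brief sketch of three of the options referenced above.
from transformers import (
    AutoConfig,
    AutoModelForSequenceClassification,
    BertTokenizer,
    XLNetTokenizer,
)

# classifier_dropout (#12794): dropout for the classification head only,
# independent of hidden_dropout_prob.
config = AutoConfig.from_pretrained("bert-base-uncased", classifier_dropout=0.2)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", config=config)

# strip_accents (#6280): control accent stripping in the basic tokenizer.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased", strip_accents=False)

# Subword regularization (#11149): sampling in slow sentencepiece tokenizers.
sp_tokenizer = XLNetTokenizer.from_pretrained(
    "xlnet-base-cased",
    sp_model_kwargs={"enable_sampling": True, "alpha": 0.1, "nbest_size": -1},
)
```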
Optuna
- add MLflow integration callback: #1028 (see the sketch after this list)
- warn when the same variable is suggested with different parameters at trial level: #908
- more fixes and improvements
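The callback from #1028 is available as optuna.integration.MLflowCallback; a minimal usage sketch with a toy objective (the tracking URI is an assumption):

```python
# Minimal sketch: log Optuna trials to MLflow via the MLflowCallback
# integration. The objective is a toy function for illustration.
import optuna
from optuna.integration import MLflowCallback

mlflow_callback = MLflowCallback(
    tracking_uri="http://localhost:5000",  # assumption: a local MLflow server
    metric_name="objective_value",
)

def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30, callbacks=[mlflow_callback])
```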
sentence-transformers
- add callback enabling pruning and checks for NaN values: #327
- add option to pass params to tokenizer: #342
- always store best_score: #439
- fix for OOM problems on GPU with large datasets: #525
Other Fixes and Improvements
- google-research/electra: add toggle to turn off strip_accents
- opensearch-project/opensearch-py: add Sphinx to generate code documentation #112
- deepset-ai/FARM: various fixes and improvements
- hyperopt/hyperopt: add progressbar with tqdm #455
- mlflow/mlflow: add possibility to use client certificates with the tracking API #2843
The German colossal, cleaned Common Crawl corpus (GC4 corpus)
This is a German text corpus based on Common Crawl. Packed, the corpus is 454 GB in size; unpacked, it is more than 1 TB. It has been cleaned and preprocessed and can be used for various NLP tasks. The dataset is joint work with Philipp Reißel (ambeRoad).
Wikipedia 2 Corpus
Tools to extract and clean Wikipedia texts and transform them into a text corpus for self-supervised NLP model training. Also includes prepared corpora for English and German.
NLU Evaluation Data - German and English + Similarity
This repository contains two datasets:
- NLU-Data-Home-Domain-Annotated-All-de-en.csv: a labeled multi-domain (21 domains) German and English dataset with 25K user utterances for human-robot interaction.
- NLU-Data-Home-Domain-similarity-de.csv: German sentence pairs from NLU-Data-Home-Domain-Annotated-All-de-en.csv with semantic similarity scores.
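A minimal loading sketch with pandas; the semicolon delimiter and the shown column access are assumptions, so check the repository's documentation:

```python
# Minimal sketch: load the two CSV files with pandas. The sep=";" delimiter
# is an assumption; adjust it to match the files' actual format.
import pandas as pd

utterances = pd.read_csv("NLU-Data-Home-Domain-Annotated-All-de-en.csv", sep=";")
similarity = pd.read_csv("NLU-Data-Home-Domain-similarity-de.csv", sep=";")

print(utterances.head())
print(similarity.head())
```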