Langformers Documentation

Langformers is a powerful yet user-friendly Python library designed for seamless interaction with large language models (LLMs) and masked language models (MLMs). It unifies core NLP pipelines, such as text generation, text classification, data labelling, sentence embedding, and semantic search, into a single, cohesive API.

Langformers is built on top of popular libraries such as PyTorch[1], Transformers[2], Ollama[3], and FastAPI[4], ensuring compatibility with modern NLP workflows. The library supports Hugging Face and Ollama models, and is optimized for both CUDA and Apple Silicon (MPS).
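
Because Langformers runs on PyTorch, you can confirm which accelerator your machine exposes using the standard PyTorch checks below. This is a minimal sketch with plain PyTorch calls; it is not part of the Langformers API.

import torch

# Standard PyTorch device checks (not Langformers-specific)
if torch.cuda.is_available():
    device = "cuda"   # NVIDIA GPU
elif torch.backends.mps.is_available():
    device = "mps"    # Apple Silicon
else:
    device = "cpu"
print(f"Accelerator available: {device}")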

Installing

You can install Langformers using pip:

pip install -U langformers

Requires Python 3.10+. For more details, check out the installation guide.
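
Once installed, a quick import is enough to confirm the package is available in your environment (assuming the pip installation above completed without errors):

# Sanity check after installation
import langformers
print("Langformers imported successfully")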

What makes Langformers special?

Whether you’re generating text, training classifiers, labelling data, embedding sentences, or building a semantic search index… the API stays consistent:

from langformers import tasks

component = tasks.create_<something>(...)
component.<do_something>()

No need to juggle different frameworks — Langformers brings Hugging Face Transformers, Ollama, FAISS, ChromaDB, Pinecone, and more under one unified interface.
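
As a concrete illustration of this pattern, here is a minimal sketch of creating a sentence embedder. The task name, provider, model name, and method shown are assumptions based on the pattern above rather than confirmed signatures, so consult the task-specific documentation for the exact arguments:

from langformers import tasks

# Hypothetical example of the create_<something>() / <do_something>() pattern;
# parameter names below are illustrative, not confirmed API.
embedder = tasks.create_embedder(
    provider="huggingface",
    model_name="sentence-transformers/all-MiniLM-L6-v2",
)
vectors = embedder.embed(["Langformers keeps the API consistent across tasks."])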

Tasks in Langformers

Langformers delivers a smooth and unified experience for researchers and developers alike, supporting a broad set of essential NLP tasks right out of the box.

Below are the pre-built NLP tasks available:

[Figure: Langformers Tasks]

Citing

If you find Langformers useful in your research or projects, feel free to cite the following publication:

@article{lamsal2025langformers,
   title={Langformers: Unified NLP Pipelines for Language Models},
   author={Rabindra Lamsal and Maria Rodriguez Read and Shanika Karunasekera},
   year={2025},
   journal={arXiv preprint arXiv:2504.09170},
   url={https://arxiv.org/abs/2504.09170}
}

Footnotes

[1] PyTorch: https://pytorch.org
[2] Transformers: https://github.com/huggingface/transformers
[3] Ollama: https://ollama.com
[4] FastAPI: https://fastapi.tiangolo.com