Large language models (LLMs) have fundamentally transformed how machines process and generate natural language, enabling unprecedented capabilities in text understanding, reasoning, and generation. However, deploying these models in specialized, high-stakes domains (such as social media, healthcare, and legal or regulatory settings) introduces critical challenges around domain adaptation, data privacy, and decision explainability, particularly for low-resource languages such as Turkish.
Our research focuses on developing domain-adapted, privacy-preserving NLP systems that can operate reliably in sensitive, real-world environments. We combine neural and symbolic approaches to build hybrid architectures where LLMs are augmented with knowledge graphs and deterministic rule engines, enabling transparent and auditable AI decisions. Our work spans efficient fine-tuning of large models for specialized domains, automatic de-identification of sensitive texts, and social media language understanding, including hate speech and toxic content detection across multiple languages.
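To make the hybrid idea concrete, the sketch below shows how a deterministic rule layer can serve as the transparent, auditable component of a de-identification pipeline. This is an illustrative toy, not our actual system: the regular expressions, labels, and the `deidentify` function are assumptions for demonstration, and in a full pipeline a fine-tuned LLM or NER model would propose additional spans (person names, addresses) that fixed rules cannot cover.

```python
import re

# Deterministic, auditable rules for masking sensitive spans.
# These patterns are illustrative examples only (not production rules).
RULES = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # Turkish mobile numbers, optionally prefixed with +90.
    "PHONE": re.compile(r"(?<!\d)(?:\+90\s?)?0?5\d{2}\s?\d{3}\s?\d{2}\s?\d{2}(?!\d)"),
    # Turkish national ID numbers are 11 digits long.
    "TC_ID": re.compile(r"(?<!\d)\d{11}(?!\d)"),
}

def deidentify(text: str) -> tuple[str, list[tuple[str, str]]]:
    """Replace sensitive spans with typed placeholders and return an
    audit log of (label, matched_text) pairs, so every masking decision
    is traceable -- the property a rule engine adds over a pure LLM."""
    audit = []
    for label, pattern in RULES.items():
        for match in pattern.findall(text):
            audit.append((label, match))
        text = pattern.sub(f"[{label}]", text)
    return text, audit
```

Because every substitution comes from an explicit, inspectable rule and is logged, the output can be audited span by span; a neural model's suggestions would pass through the same logging layer before being applied.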