Published in TDS Archive · Sparse AutoEncoder: From Superposition to Interpretable Features · Disentangling features in complex neural networks with superposition · Feb 1
Published in TDS Archive · Superposition: What Makes It Difficult to Explain Neural Networks · When there are more features than model dimensions · Dec 29, 2024
Published in TDS Archive · Under-trained and Unused Tokens in Large Language Models · The existence of under-trained and unused tokens, and identification techniques using GPT-2 Small as an example · Oct 1, 2024
Published in TDS Archive · KernelSHAP Can Be Misleading with Correlated Predictors · A concrete case study · Aug 9, 2024
Published in TDS Archive · Creating an Assistant with the OpenAI Assistant API and Streamlit · A step-by-step guide · Jun 18, 2024
Published in TDS Archive · Can Neural Networks Formulate Shock Waves? · How we build a PINN for the inviscid Burgers equation with shock formation · Apr 4, 2024
Published in TDS Archive · How to Interpret GPT2-Small · Mechanistic interpretability of repeated-token prediction · Mar 22, 2024
Published in Towards AI · How Can AI Help Visually Impaired People See the World? · An attempt to craft an app powered by an LLM to support the visually impaired · Dec 28, 2023
Published in TDS Archive · Enhancing CSV File Query Performance in ChatGPT · With LangChain's self-querying based on a customized CSV loader · Jul 31, 2023
Published in TDS Archive · Demystifying Bayesian Models: Unveiling Explainability through SHAP Values · Exploring PyMC's insights with the SHAP framework via an engaging toy example · May 12, 2023