What is the difference between fine-tuning and semantic search when building a model for question answering?
— Fine-tuning is a transfer-learning method that teaches a model a new task, while semantic search uses embeddings to retrieve content based on meaning and context.
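In practice the semantic-search side comes down to embedding text and ranking by vector similarity. The sketch below is a minimal illustration of that idea, assuming a placeholder `embed` function that stands in for whatever embedding model is actually used; it is not a specific tool from the original discussion.

```python
# Minimal semantic-search sketch: embed documents and a query,
# then rank documents by cosine similarity.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stub: a pseudo-random unit vector standing in for a real embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def semantic_search(query: str, documents: list[str], top_k: int = 3):
    query_vec = embed(query)
    doc_vecs = np.stack([embed(d) for d in documents])
    # Vectors are unit-normalized, so cosine similarity is just a dot product.
    scores = doc_vecs @ query_vec
    ranked = np.argsort(scores)[::-1][:top_k]
    return [(documents[i], float(scores[i])) for i in ranked]

# Usage: semantic_search("How do embeddings work?", my_documents)
```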
Why is fine-tuning not suitable for QA?
— Fine-tuning only unfreezes a small portion of the model rather than retraining the whole thing, so it mainly teaches the model a new task or output format rather than storing new factual content, which makes it unsuitable for QA over material the base model has never seen.
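To make "only unfreezes a small portion" concrete, here is a minimal PyTorch-style sketch; the layers and sizes are illustrative assumptions, not from the original text. All pretrained weights are frozen and only the final task head is left trainable.

```python
# Typical fine-tuning setup: freeze the pretrained body, train only the head.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(768, 768), nn.ReLU(),   # stand-ins for pretrained layers
    nn.Linear(768, 768), nn.ReLU(),
    nn.Linear(768, 2),                # task head that fine-tuning retrains
)

# Freeze everything, then unfreeze only the last layer.
for param in model.parameters():
    param.requires_grad = False
for param in model[-1].parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable} / {total} parameters")
```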
Why is semantic search a better option than fine-tuning?
— Semantic search is fast, easy, and cheap: embeddings can be precomputed once and searched at scale across large databases.
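One way to see the cost argument: the corpus embeddings are computed once, offline, and each new question then costs only a single embedding call plus one matrix-vector product. A minimal sketch, again assuming the same placeholder `embed` stub rather than a real embedding model:

```python
# Precompute-once, query-many semantic search.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stub standing in for a real embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

corpus = ["doc one ...", "doc two ...", "doc three ..."]  # imagine millions of rows
corpus_matrix = np.stack([embed(d) for d in corpus])       # embedded once, stored

def candidates(question: str, top_k: int = 5) -> list[str]:
    scores = corpus_matrix @ embed(question)   # one matrix-vector product per query
    return [corpus[i] for i in np.argsort(scores)[::-1][:top_k]]
```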
Is fine-tuning reliable for acquiring new knowledge?
— No. Fine-tuning is not reliable for acquiring new knowledge: the model has no theory of knowledge or mind, so fine-tuning is an unreliable door to new knowledge.
How does one approach QA using semantic search?
— Start with a question, forage for information, compile a relevant corpus, extract the salient bits, and produce an answer, using semantic embeddings as a kind of Dewey Decimal System for locating relevant content.
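Putting those steps together, a minimal end-to-end sketch might look like the following, where `embed` stands in for an embedding model and `complete` for a language-model call; both are placeholders for illustration, not the original author's implementation.

```python
# Question -> retrieve salient passages -> compile a prompt -> generate an answer.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stub embedding; replace with a real embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

def complete(prompt: str) -> str:
    # Placeholder for a call to a language model.
    return "<model-generated answer>"

def answer(question: str, corpus: list[str], top_k: int = 3) -> str:
    # 1. Forage: rank the corpus by semantic similarity to the question.
    matrix = np.stack([embed(d) for d in corpus])
    scores = matrix @ embed(question)
    salient = [corpus[i] for i in np.argsort(scores)[::-1][:top_k]]
    # 2. Compile: put the salient passages into a prompt.
    context = "\n\n".join(salient)
    prompt = (
        "Answer the question using only the context.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    # 3. Produce: ask the generator for the final answer.
    return complete(prompt)
```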