Where are the opportunities?

1. Evaluate and select models: Businesses will need access to tools and expertise to help evaluate which model to use for which use case. Developers need to decide how best to evaluate whether a particular model is suitable for the "job to be done." The evaluation needs to weigh multiple factors: not only the performance of the model, but also its cost, the level of control it allows, and so on. A minimal scoring sketch follows.
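To make this concrete, here is a minimal sketch of how such an evaluation might be scored, assuming candidate outputs for a small eval set have already been collected offline. The model names, prices, weights, and exact-match metric are illustrative placeholders, not recommendations.

```python
# Minimal model-selection scorecard: blend a crude quality metric with cost.
# All names, prices, and weights below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    cost_per_1k_tokens: float   # hypothetical price, not a vendor quote
    outputs: list[str]          # pre-collected answers for the eval set, in order

def exact_match_rate(outputs: list[str], references: list[str]) -> float:
    """Crude quality proxy: fraction of answers that match the reference."""
    hits = sum(o.strip().lower() == r.strip().lower() for o, r in zip(outputs, references))
    return hits / len(references)

def blended_score(c: Candidate, references: list[str],
                  quality_weight: float = 0.8, cost_weight: float = 0.2) -> float:
    """Combine task quality and (inverse) cost into one comparable number."""
    return quality_weight * exact_match_rate(c.outputs, references) \
        - cost_weight * c.cost_per_1k_tokens

references = ["paris", "4", "blue"]
candidates = [
    Candidate("large-hosted-model", 0.06, ["Paris", "4", "blue"]),
    Candidate("small-open-model", 0.002, ["Paris", "5", "blue"]),
]
for c in sorted(candidates, key=lambda c: blended_score(c, references), reverse=True):
    print(f"{c.name}: quality={exact_match_rate(c.outputs, references):.2f}, "
          f"score={blended_score(c, references):.3f}")
```

In practice the quality metric, the weighting, and dimensions such as latency and controllability would all be tuned to the specific "job to be done."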
2. Run and maintain models: Platforms will emerge to help companies train, fine-tune, and run models (especially the third category, long-tail models). Traditionally, these platforms have been broadly referred to as ML Ops platforms, and we expect that definition to expand to cover generative AI as well. Platforms such as Weights and Biases, among others, are rapidly moving in this direction.
3. Retrieval augmentation: Models, especially hosted LLMs, need retrieval augmentation to deliver ideal results. This requires a series of supporting decisions, including:
Data and metadata extraction: How to connect to structured and unstructured enterprise data sources, then extract the data along with metadata such as access policies.
Embedding generation and storage: Which model should generate embeddings for the data, and how should they be stored? The choice of vector database depends on the required performance, scale, and functionality.
There is an opportunity to build an enterprise-grade RAG platform that removes the complexity of selecting and stitching these components together. A minimal sketch of the retrieval step appears below.
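As a rough illustration of the retrieval step such a platform would orchestrate, here is a minimal sketch. The hash-based embedding and in-memory index are toy stand-ins for a real embedding model and vector database, and the documents and access-policy metadata are invented for the example.

```python
# Minimal sketch of the retrieval step in a RAG pipeline. The toy hashed
# bag-of-words embedding and in-memory list stand in for a real embedding
# model and vector database, which is where the platform choices above apply.
import hashlib
import math

DIM = 256  # dimensionality of the toy embedding

def embed(text: str) -> list[float]:
    """Toy embedding: hash each word into a fixed-size, normalized vector."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Index: embed enterprise documents together with their metadata (e.g. access policy)
documents = [
    {"text": "Expense reports must be filed within 30 days.", "acl": "finance"},
    {"text": "Production incidents are triaged by the on-call engineer.", "acl": "engineering"},
]
index = [(doc, embed(doc["text"])) for doc in documents]

# Query: retrieve the most relevant chunk to prepend to the LLM prompt
query_vec = embed("how long do I have to file an expense report")
ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
for doc, _ in ranked[:1]:
    print(doc["acl"], "->", doc["text"])
```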
4. Operational tooling: Enterprise IT will need to build guardrails for engineering teams, manage costs, and so on; all of the software development tasks they handle today will now need to extend to the use of AI. Areas of interest to IT departments include:
Observability: How do models perform in production? Does their performance improve or degrade over time? Are there usage patterns that might influence which model an application should use in future releases? A sketch of basic per-call instrumentation follows this list.
Security: How to keep AI-native applications secure. Are these applications vulnerable to new attack vectors that require new defensive platforms?
Compliance: We expect that the use of AI-native applications and LLMs will need to comply with frameworks that the relevant regulatory agencies have begun to develop, in addition to existing compliance regimes covering privacy, security, consumer protection, and fairness. Businesses will need platforms that help them stay compliant, conduct audits, generate compliance certificates, and handle related tasks.
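As an illustration of the observability point above, here is a minimal sketch of per-call instrumentation. call_model is a hypothetical stand-in for whatever client an application actually uses, and the logged fields are examples of the signals an IT team might track.

```python
# Minimal sketch of per-call observability for an LLM-backed application.
# `call_model` is a placeholder; the logged fields (latency, rough token
# counts, status) are the kind of signals worth tracking in production.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("llm.observability")

def call_model(model: str, prompt: str) -> str:
    """Hypothetical stand-in for a real model client; returns a canned reply."""
    return f"[{model}] response to: {prompt[:40]}"

def observed_call(model: str, prompt: str) -> str:
    start = time.perf_counter()
    status, response = "error", ""
    try:
        response = call_model(model, prompt)
        status = "ok"
        return response
    finally:
        # Emit one structured record per call; in production this would feed
        # a metrics or tracing backend rather than stdout.
        log.info(json.dumps({
            "model": model,
            "status": status,
            "latency_ms": round((time.perf_counter() - start) * 1000, 2),
            "prompt_tokens_est": len(prompt.split()),        # rough proxy
            "completion_tokens_est": len(response.split()),  # rough proxy
        }))

print(observed_call("some-hosted-model", "Summarize last quarter's incident reports."))
```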