Opinions expressed by Entrepreneur contributors are their own.
Although AI was established as a discipline within computer science decades ago, it became a buzzword with the emergence of generative AI in 2022. Yet regardless of the maturity of AI as a scientific discipline, large language models remain deeply immature.
Entrepreneurs, especially those without a technical background, are eager to use LLMs and generative AI to power their business endeavors. While it is wise to harness technological progress to improve business processes, in the case of AI it must be done responsibly.
Many business leaders today are driven by hype and external pressure. From first-time founders seeking funding to corporate strategists presenting innovation programs, the instinct is to integrate the top AI tools as quickly as possible. That rush to integrate overlooks the critical shortcomings that lie beneath the surface of generative AI systems.
Related: 3 Costly Mistakes Companies Make When Using Gen AI
1. Large language models and generative AI have deep algorithmic flaws
To put it simply, these systems have no real understanding of what they are doing, and although you can steer them for a while, they often lose the thread.
These systems do not think. They predict. Every sentence an LLM produces is generated through token-by-token probability estimates based on statistical patterns in the data it was trained on. It does not know truth from falsehood, logic from noise, or context from coincidence. Its answers may sound authoritative yet be wildly incorrect, especially when it operates outside its training distribution.
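To make that concrete, here is a toy sketch of token-by-token sampling. The probability table is invented purely for illustration; a real model computes such distributions with a neural network over a vocabulary of tens of thousands of tokens.

```python
import random

# Toy stand-in for a language model: hand-made probabilities over next tokens.
# A real LLM computes a distribution like this at every step of generation.
NEXT_TOKEN_PROBS = {
    ("The", "invoice"): {"is": 0.55, "was": 0.30, "exploded": 0.15},
    ("invoice", "is"): {"paid": 0.50, "overdue": 0.35, "purple": 0.15},
}

def sample_next(context):
    """Pick the next token by probability alone, with no notion of truth."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

# Nothing here checks facts: "exploded" can be chosen simply because
# it carries 15% of the probability mass.
third = sample_next(("The", "invoice"))
print("The invoice", third)
```

At no step does the process consult reality; plausibility is the only thing it optimizes.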
2. A lack of accountability
Incremental software development is a well-documented approach in which developers can trace changes back to requirements and retain full control of the current state.
This allows them to identify the root causes of logical errors and take corrective measures while keeping the system consistent. LLMs also evolve incrementally, but with no record of what caused a change, what their previous state was, or what their current state is.
Modern software engineering is built on transparency and traceability. Every function, module and dependency is observable and accountable. When something fails, logs, tests and documentation lead developers to a fix. None of this is true for generative AI.
An LLM's weights are tuned through an opaque process of black-box optimization. No one, not even the developers behind these models, can determine which specific training data caused a new behavior. That makes debugging impossible. It also means the models can degrade or shift in performance across update cycles without leaving any audit trail.
For businesses that depend on accuracy, predictability and compliance, this lack of accountability should raise red flags. You cannot version-control an LLM's internal logic. You can only observe it.
Related: A Closer Look at the Pros and Cons of AI in Business
3. Zero-day attacks
In traditional software and systems, zero-day attacks can be traced, and developers can patch the vulnerability because they know what they built and can reconstruct the flaw that was exploited.
With LLMs, every day is day zero, and no one may even notice, because there is no visibility into the state of the system.
Security in traditional computing assumes that flaws can be detected, diagnosed and repaired. The attack vector may be new, but there is a framework for responding. Not so with generative AI.
Because there is no deterministic code base behind most of the logic, there is also no way to determine the root cause of an exploit. You learn there is a problem only when it surfaces in production. And by then, the reputational or regulatory damage may already be done.
Given these critical flaws, I would argue that entrepreneurs should take the following steps:
1. Use generative AI in quarantine mode:
The first and most important step: entrepreneurs should use generative AI only in quarantine mode and never integrate it into their business processes.
By integration, I mean connecting LLMs to your internal systems through their APIs. Never do this.
The term "integration" implies trust. You trust that the component you integrate will consistently uphold your business logic and will not corrupt the system. That level of trust is unwarranted for generative AI tools. Wiring LLM APIs directly into databases, operations or communication channels is not merely risky; it is reckless. It opens holes for data leaks, functional errors and automated decisions based on hallucinated context.
Instead, treat LLMs as external, isolated engines. Run them in a quarantined sandbox where their outputs can be evaluated before they reach any human or system, as in the sketch below.
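As a rough sketch of what quarantine mode can look like in practice (the file-queue design and all names here are my own illustration, not any vendor's API), the model is called through one isolated function whose only output channel is a review folder:

```python
import json
import time
from pathlib import Path

# Hypothetical holding area: drafts wait here, outside all business systems.
REVIEW_QUEUE = Path("llm_review_queue")
REVIEW_QUEUE.mkdir(exist_ok=True)

def ask_llm_quarantined(prompt: str, call_model) -> Path:
    """Call the model, but route its answer into a review queue instead of
    any database, workflow, or customer-facing channel."""
    draft = call_model(prompt)  # call_model is whatever client you choose, passed in
    record = {
        "prompt": prompt,
        "draft_output": draft,
        "status": "PENDING_HUMAN_REVIEW",  # nothing leaves without sign-off
        "created_at": time.time(),
    }
    path = REVIEW_QUEUE / f"draft_{int(time.time() * 1000)}.json"
    path.write_text(json.dumps(record, indent=2))
    return path
```

What matters is what the sketch omits: there is no import of your database layer and no code path from the model's output into production.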
2. Use human supervision:
Within the sandbox, assign a human supervisor to prompt the machine, inspect the output and carry approved results back into internal operations. There must be no direct machine-to-machine interaction between LLMs and your internal systems.
Automation sounds efficient, until it isn't. When LLM outputs flow straight into other machines or processes, you create a blind pipeline. No one is there to say, "This doesn't look right." Without human supervision, a single hallucination can cascade into financial loss, legal exposure or misinformation.
A human-in-the-loop model is not a perfect guarantee, but it is your best line of defense.
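Continuing the quarantine sketch above (again, an illustration rather than a prescribed implementation), the human review step is the only gate through which a draft can leave the queue:

```python
import json
from pathlib import Path

def human_review(record_path: Path):
    """A person reads the draft and explicitly approves or rejects it.
    Only approved text is handed onward; everything else stops here."""
    record = json.loads(record_path.read_text())
    print("--- LLM draft ---")
    print(record["draft_output"])
    verdict = input("Approve for internal use? [y/N] ").strip().lower()
    record["status"] = "APPROVED" if verdict == "y" else "REJECTED"
    record_path.write_text(json.dumps(record, indent=2))
    # The return value below is the sole path out of quarantine.
    return record["draft_output"] if verdict == "y" else None
```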
Related: Large Language Models: Unlimited Potential, But Proceed With Caution
3. Never feed your business information to generative AI or assume it can solve your business problems:
Treat these tools as dumb and potentially dangerous machines. Use human experts to define the requirements, the business architecture and the solution. Then use prompt engineering to ask the AI machine narrow, implementation-specific questions, function by function, without revealing the overall purpose.
These tools are not strategic advisors. They do not understand business domains, your goals or the nuances of the problem space. What they generate is a linguistic pattern match, not a solution grounded in intent.
Business logic must be defined by people, based on purpose, context and judgment. Use AI only as a tool to support implementation, never to design strategy or own decisions. Treat AI like a scripting calculator: useful for parts, never in charge of the goal.
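To illustrate the scoping advice above (both prompts are invented examples, not from any real company): the first question hands the model your strategy; the second asks only for one narrow, context-free function that a human-designed architecture already calls for.

```python
# Invented examples of prompt scoping; neither comes from a real business.

# Leaky: reveals the business goal, the strategy, and internal data.
leaky_prompt = (
    "We are planning to undercut our main competitor on enterprise pricing. "
    "Here are our customer margins: ... Design our pricing engine."
)

# Scoped: a narrow implementation question with no business context.
scoped_prompt = (
    "Write a Python function that applies a percentage discount to a "
    "decimal price and rounds the result to two decimal places."
)
```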
In conclusion, generative AI is not yet ready for deep integration into business infrastructure. Its models are immature, their behavior is opaque, and their risks are poorly understood. Entrepreneurs must resist the hype and adopt a defensive posture. The cost of misuse is not mere inefficiency; it is irreversibility.