One of the biggest risk areas I see in organisations is a lack of awareness of how users actually use technology platforms - both approved and unapproved. Corporate helpdesks exist to (kinda) help with Microsoft 365, SAP, and a dozen other major platforms, but they are purely reactive to incidents; they rarely see how those platforms are really used, or what employees do to work around systemic failures or gaps.
In those gaps lie the vulnerabilities: employees installing unmanaged add-ons, or simply entering corporate data into their BYOC of choice.
In a world where platform vendors activate new agents and bots inside company tenants without internal IT having any awareness or control, how are users supposed to assess genuine external threats? What does a bad bot look like compared to a good one?