Interview with Tom Howe, Director of Insights Engineering at Hydrolix, on how companies and consumers should view bots as the AI landscape continues to change how we all shop online. Plus, he takes a deep dive into how to identify the good, the bad, and the ugly bot types, and what to do with each.
“Knowledge over fear,” says Tom Howe as we start our conversation. He’s light-hearted about the unknown, which probably stems from his social sciences background. It’s that foundation that led him into a decade or so of integrated data science, and it now gives him the expertise to bridge the gap between internal engineering and customers at Hydrolix.
So, what do the social sciences have to do with AI bots? Plenty. Howe breaks down the motivational energies from a company and consumer perspective to remove the stigma of fear around bots, going so far as to describe what’s good about them.
The Identity Crisis of Defining “Good” vs “Bad” vs “Malicious” Bots
For someone who's not an expert in bot detection, what's the simplest way to explain the difference between a "good" bot and a "bad" bot? Is the line always that clear?
No, the line can sometimes be very unclear. What’s worse is that it’s all relative. A good bot can accentuate your goals in the marketplace, while a bad bot can degrade your goals and achievements. Bad bots can also behave in ways that are misleading or not well understood.
There is a third type of bot. Malicious bots are outwardly trying to get you. Their primary goal (what they are programmed to do) is to sabotage your website or system, and people sometimes confuse malicious bots with bad bots. The major distinction is that there can be opportunities for bad bots to become good bots, with AI or data engineers influencing the bot’s behavior. But malicious bots cannot become good.
When you talk about bots exhibiting unexpected behavior, what does that actually look like in practice? What are companies seeing in their data that tips them off?
The most noticeable tell is that bots hit sites in a different way than a human would. The patterns humans typically produce as they click, browse, and navigate page views are currently not replicated by any bot. A bot may follow a similar workflow, but ultimately its behavior looks too different from a human’s at this point in time.

The data derived from actual user interaction on a website sets the precedent for what may trigger a bot event. For example, how much time someone spends on a product page (the infrastructure level) and how they engage with a domain-specific page (understanding what the page means) separates bots from humans.

Other ways companies can be tipped off are certain sets of uncharacteristic behaviors, like all activity coming from the same location or IP address, or actions that don’t make sense in the context of the website.
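To make that concrete, here is a minimal sketch in Python of the kind of heuristics Howe is describing. This is not Hydrolix’s actual detection logic; the field names and thresholds are hypothetical and would need tuning against labeled traffic:

```python
from collections import defaultdict

# Hypothetical thresholds; real systems tune these against labeled traffic.
MIN_DWELL_SECONDS = 2.0      # humans rarely "read" a product page in under 2s
MAX_PAGES_PER_MIN = 30       # sustained click rates above this look automated
MAX_SESSIONS_PER_IP = 50     # many distinct sessions from one IP is suspicious

def score_session(events):
    """Score one session's page-view events; higher means more bot-like.

    Each event is a dict like {"ip": str, "ts": float, "path": str}.
    """
    if not events:
        return 0
    score = 0
    events = sorted(events, key=lambda e: e["ts"])
    dwells = [b["ts"] - a["ts"] for a, b in zip(events, events[1:])]
    if dwells and sum(dwells) / len(dwells) < MIN_DWELL_SECONDS:
        score += 1                        # pages consumed faster than a human reads
    duration_min = max((events[-1]["ts"] - events[0]["ts"]) / 60, 1 / 60)
    if len(events) / duration_min > MAX_PAGES_PER_MIN:
        score += 1                        # unsustainably high click rate
    return score

def flag_ips(sessions):
    """Flag IPs that originate an implausible number of sessions."""
    per_ip = defaultdict(int)
    for events in sessions:
        per_ip[events[0]["ip"]] += 1
    return {ip for ip, n in per_ip.items() if n > MAX_SESSIONS_PER_IP}
```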
How much of understanding bot behavior is about the technology versus just knowing what questions to ask of your data in the first place?
20/80.
This seems complicated, and it is, but the simple answer is that it really depends on the perspective of the person reviewing the data the bots generate. Within a company, there are different motivators for employees, and even different motivators between employees within the same department. Individuals can also classify data differently depending on how they’re categorizing it.
An example can look like this: from the perspective of a marketing manager, bots crawling a website to find deals means the marketing must be working, because a consumer is shopping. From a business leader’s POV, however, the same activity may mean inaccurate website data or missed revenue.

To really grasp what bots are doing, you have to recognize that you may value a behavior differently than someone else does, and you have to look at outcomes, not just immediate actions.
The Cost of Blanket Blocking and the $5M Mistake
We've heard about an enterprise SaaS company losing $5 million by blocking what they thought were bad bots. How common is that kind of costly mistake, and what's your reaction when you hear stories like that?
Honestly, it’s way too common, and at the same time more common than we know, because companies under-report out of concern for optics and bad PR. It doesn’t look sophisticated to solve a potential problem by just stopping it, especially if that problem turns out to not be a problem at all. In the retro at that SaaS company, leadership acknowledged that it wasn’t one day of lost sales; it was a de-indexing crisis that lasted for days and had long-term effects.
If this type of “solution” is ongoing, it becomes more difficult to identify and leverage bots correctly. Like I said, sometimes “bad” bots aren’t actually bad, and since “good” bots also exist, you could be blocking search engine crawlers or other beneficial bots. Plus, it can take months for a company’s SEO ranking to bounce back after this type of algorithmic erasure.
Can you walk us through how bots can actually increase sales? This is a mindset shift from thinking all bots are bad. How does that work in a real-world scenario?
Take search engine crawlers like Googlebot or Bingbot: when a company blocks bots universally, it can lose all the traffic those crawlers would otherwise bring (see the $5M loss above). Bots need to be handled differently and more precisely. The goal isn’t a complete blackout; it’s reducing potential damage. And doing that takes finesse. Use a scalpel, not a machete, so you don’t cut off more than you expect.
A bot, if not malicious, is genuinely trying to improve something for someone, whether from the POV of the customer or the company. An example could be multiple profiles coming from the same network with the same request. Instead of blocking them, an automated 10% off coupon is triggered, which actually exploits the bot in order to help both the user and the company. This in turn can lead to increased sales and long-term customer loyalty.
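A minimal sketch of that response pattern, assuming a hypothetical issue_coupon helper and that duplicate requests from one network have already been grouped upstream:

```python
def respond_to_duplicate_shoppers(profiles, issue_coupon):
    """Instead of blocking N profiles making the same request from one
    network, answer the likely deal-hunting bot with a real offer.

    `profiles` is a list of profile IDs sharing a network and request;
    `issue_coupon` is a hypothetical callback that attaches a discount.
    """
    if len(profiles) < 3:        # a couple of shared profiles is normal
        return
    for profile_id in profiles:
        issue_coupon(profile_id, percent_off=10)  # turn the probe into a sale
```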
And this brings me back to my sociology background. In a way, bots help to manage users by manipulating their behaviors.
A lot of companies default to just blocking everything that looks remotely suspicious. Why is that approach potentially leaving money on the table?
This plays into the above, but overall, bots that play by the rules need the rules to be in place. For example, the robots.txt file can tell an SEO bot where to look or help Google index a company’s website, as in the snippet below. Removing 100% of bots also removes any tangible benefit bots provide.
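For illustration, a simple robots.txt along those lines (the paths here are hypothetical) tells rule-following crawlers where they may and may not go:

```
# Allow search engine crawlers to index the catalog,
# but keep them out of checkout and account pages.
User-agent: Googlebot
Allow: /products/
Disallow: /checkout/
Disallow: /account/

# Default rule for all other compliant bots.
User-agent: *
Disallow: /checkout/
Disallow: /account/
Crawl-delay: 10
```

Note that robots.txt is advisory: good bots honor it, while malicious bots ignore it, which is exactly why it pairs with, rather than replaces, detection.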
There also has to be a line: blocking wherever traffic merely looks suspicious can eliminate real traffic, which costs sales, MAUs, and users. Not everyone is on basic cable, fiber, or 5G, and not every machine is a Windows PC or a MacBook. Linux users can get caught by bot detection simply because their user agent differs from the norm.

By relying on one-size-fits-all detection rules, companies can punish real customers who happen to use tools like VPNs or specialized browsers. Not only does this lose business, it also distorts the data companies count on, like conversion rates and growth metrics.
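One hedged way to apply the scalpel rather than the machete, sketched below with hypothetical signal names, is to require several independent signals to agree before blocking, so a Linux user agent or a VPN exit node alone is never enough:

```python
# Hypothetical per-request signals; any one of them alone is weak evidence.
SUSPICIOUS_SIGNALS = {
    "rare_user_agent",       # e.g. a Linux browser string
    "vpn_or_proxy_ip",       # privacy tooling, not proof of automation
    "headless_fingerprint",  # missing fonts / low canvas entropy
    "inhuman_click_rate",    # pages per minute beyond human speed
}

def should_block(signals: set[str], threshold: int = 3) -> bool:
    """Block only when several independent signals agree.

    A lone rare user agent or VPN IP stays below the threshold, so
    legitimate Linux and VPN users are not punished for that alone.
    """
    return len(signals & SUSPICIOUS_SIGNALS) >= threshold

# A Linux user on a VPN trips two signals but is still served normally.
assert not should_block({"rare_user_agent", "vpn_or_proxy_ip"})
```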
Hydrolix is processing terabytes to petabytes of data from CDNs and WAFs. What patterns or signals in that data most reliably tell you whether a bot is working for you or against you?
The first thing you need to know is the bot’s behavior and its desired outcome. Is it hunting a lower price that ends in a sale, or did it just scrape and steal your internal IP (intellectual property)? The latter sounds really bad, but in some cases companies have found ways to license their material to the companies doing the intentional scraping, which can turn it into a good thing.

Again, it comes down to this: what is the bot’s motivation, and what is the company’s goal? Data scientists need to see whether those align in order to distinguish between a “good” and a “bad” bot, and from there decide whether it’s working for or against your agenda.
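As a rough sketch of that alignment check (the log fields and categories are illustrative, not Hydrolix’s schema), you could aggregate the rows a given bot identity leaves in CDN/WAF logs and compare what it consumed against what the business gained:

```python
from collections import Counter

def classify_bot(log_rows):
    """Classify a single bot identity from its CDN/WAF log rows.

    Each row is a dict like {"path": str, "status": int}; the categories
    below are illustrative, not a real taxonomy.
    """
    sections = Counter(row["path"].split("/")[1] for row in log_rows)
    purchases = sections.get("checkout", 0)
    catalog_reads = sections.get("products", 0)
    if purchases:
        return "working for you"          # scraping that ends in sales
    if catalog_reads and not purchases:
        return "candidate for licensing"  # pure content consumption
    return "needs investigation"
```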
The Potentially Friendly Future of AI Agents and Proxies
Dynamic pricing is a big part of how ecommerce works today. How are AI bots exploiting that, and why should companies be paying closer attention to it than they currently are?
If you understand the behavior of the bot, you can try to maneuver it to work for you. Bots can enter a site with a different profile than the user has. This mostly works when no existing account is required, but as bots shop and visit a site more and more, the persona they build with the company grows and they start to get different responses.

Companies want dynamic pricing because it drives physical and digital traffic. This is easy to see when a consumer enters a prompt into an AI tool and bots crawl sites to come back with suggestions. If a company is blocking bots, the consumer never sees that option, which could mean lost revenue.
What's the first thing a company should do if they suspect bots are impacting their bottom line? Where does the investigation even start?
This goes back to knowledge again. First, understand what the bots are doing and why you think they’re hurting your bottom line. Next, figure out what the bots are trying to accomplish and what purpose they’re serving. That’s really the only way to know if the bots are actually impacting the bottom line.

Bots are autonomous; they do whatever they’re programmed to do. Knowing the above helps you answer the next set of questions. You’ll need to know the breakdown of your costs and the bots’ benefits, and then take that information to the organization to gather other roles’ perspectives on the bots’ activities.

The short-term strategy may be “block right now” while you discuss and understand, but that’s not a long-term strategy. Once you know how to approach the specific bot issue, how unique each bot is, and what its motivations are, then you can come up with a thoughtful solution.
Looking ahead, as AI agents get more sophisticated, how do you see the relationship between companies and bots evolving? Are we heading toward a world where businesses are essentially negotiating with bots on behalf of consumers? Finally, how does a consumer know this is happening?
Increasingly, businesses are recognizing that they are no longer marketing directly to humans, but rather to AI agents via “the bot.” I do see a future of software meeting software. For example, a consumer shopping is really just one machine telling another machine what the consumer’s options are and negotiating on behalf of that consumer.
This will take some time, as data scientists and AI engineers are still learning how bots behave well enough to steer them into being bots for good. It also creates an education gap, because typical consumers aren’t AI or data experts. The odds of an individual cracking the bot code are pretty close to zero, and at present the only way a consumer would know is by monitoring their own digital footprint. There are some simpler ways, like using incognito windows or watching outbound browser traffic, but that brings us back to a more advanced, enterprise-level understanding of bots and pattern recognition.
It feels like we’re still at the forefront of this AI technology, all of us: consumers, marketers, companies, and data engineers. We’re learning as we go and as the technology develops. But that’s why we don’t do things in haste or without the proper deep dives into the data. We don’t fear what we don’t know; we out-smart this new frontier with knowledge, and we gather more every day.