Telemetry Now  |  Season 2 - Episode 18  |  October 25, 2024

Getting Started with Using LLMs in Network Operations

Explore how large language models (LLMs) like GPT and Llama are reshaping network operations. Ryan Booth joins Telemetry Now to discuss the practical applications of LLMs, from querying complex telemetry data to facilitating advanced analysis. Learn about current use cases—including natural language querying and automated workflows—and emerging trends like agentic AI for network automation. We also discuss challenges such as hallucinations and real-time data handling and offer some advice for integrating LLMs into your network operations today.

Transcript

The networking industry has been kinda suspicious of using large language models for serious network operations. I've seen it. People are concerned about hallucinations and the real value that LLMs can add to what we do. And so, yeah, I mean, we've seen value in using ChatGPT or whatever your preferred LLM is to generate some Python code or maybe a router configuration. And I think that's actually a great use case. But what else can we do? And, really, how do we even get started?

Today, we have returning guest and my good friend Ryan Booth to talk about some of the practical use cases for using LLMs in network operations, including the important concepts, specific models, and specific techniques and methods that we can use to get started literally this weekend. And, yes, you can get up and running that fast. My name is Philip Gervasi, and this is Telemetry Now.

So Ryan, thanks so much for joining again on the podcast. I don't know how many times you've been on now, three or four, but it's always a pleasure talking to you. And I really appreciate talking to you, not just because we're friends, but also because, in different ways, we're sort of tracking in a similar direction, obviously from different angles, in this industry, in this journey with AI and ML.

And, you know, you've had a lot of opportunity to get really deep into the weeds, and that's one of the reasons I love talking to you, other than just chatting and getting to catch up, of course. So today, I really wanna talk about how we can get started using LLMs, more than just "I've experimented with ChatGPT" or, you know, name another model. Right? I've experimented with Claude or whatever. And that's cool. And there's usefulness there.

But how do we get deeper now? What is the next step, or what are the next steps? What does that look like? Maybe what does it look like for us in tech and networking? And so that's kinda where I wanna go today.

Yeah. Absolutely. Let's let's dig into it. This will be fun.

Appreciate you having me on again. I always enjoy talking to you. And right now, I enjoy talking AI/ML, so it's working out for everybody.

You know, over the last year, the past twelve, sixteen months or so, the general recommendation for everybody has been, like you said, just jump into ChatGPT, Perplexity, any of these AI chatbots out there, and start learning how you can interact with them. Start learning how you can use them, or where they can add value in whatever you're doing day to day or at work.

And I think for the most part, the large bulk of people out there have really done that, and they've really embraced it. And it's become an assistant that sits beside you almost every single day as you're working. I know it is for me.

But now we're at the point where we've kind of learned how we can work with it and what we can do with it, but it's a very manual process. You go into one of these chatbots, and you get output.

And the best you can do with this output is, you know, recreate it yourself to put it somewhere else, or copy and paste. And that doesn't necessarily work for any type of workflow that we would have in the workplace.

And so now is the time where we start building these workflows.

We start understanding, in the weeds, how ChatGPT and these other chatbots actually process through the information to get to the output.

And the reason to start doing this is because we need to start replicating these workflows that we want and figuring out where to plug in LLMs, where to plug in natural language processing, to handle our workflows for us. And there's a lot of tools out there. I always say that twenty twenty-three was the year of retrieval augmented generation, or RAG, where everybody started learning about that. Twenty twenty-four has been a hundred percent about agents.

Agents have been everywhere and how to leverage them, how to use them, and they are very powerful, very powerful to use. Yeah. It's just a matter of getting familiar with how to put that stuff together. And I'm not just necessarily saying from a coding perspective because there are way more than plenty of no code, low code solutions to be able to do this.

And so, you know, it can very much be clicky-clicky with your mouse instead of writing code.

Yeah. Yeah. For sure. I've noticed the same trend. Twenty twenty-three was the year of RAG, big time, and twenty twenty-four the year of agentic AI.

But I will say that I still see, among tech influencers and the talking heads in our various communities, a little skepticism about the value of AI, but specifically large language models, and maybe generative AI from a larger perspective, in network operations and IT operations. A healthy skepticism. We should always have that and really question, you know, is there value here? What is this costing me in time and resources? Does it add any value to me or my team?

And if not, you know, why am I spending all this money and time? And I think it took some time, but I do believe that we have a couple of clear use cases, maybe more than a couple. And from a high level, maybe we could start with that instead of getting right into the weeds, which I do wanna discuss, especially how we can leverage LLMs beyond just asking them questions and getting config snippets, but using them in a larger workflow, which I do think is one of the use cases. So with that being said, I personally believe one of the big ones goes beyond what we've been using it for now, like a coding assistant.

Right? That's a clear use case to help you in your personal workflow, creating Python scripts and configuration snippets for your Arista or Cisco devices. That's fine.

But going beyond that, to where we say LLMs are really adding deep value to network operations, I think it's this idea of being able to query and interrogate data: very, very big databases with very diverse data. Think about flow logs, metrics that you derive from eBPF, regular device metrics from streaming, and the metadata, like your site names and tags that you have flying around, all that very diverse data. And then a lot of it, especially when you consider adding in config files and maybe your ticketing system, is text based data: syslog messages, emails, whatever it happens to be. That is what you and I, Ryan, and many of our listeners over the years have mined through to solve a problem.

And it takes a lot of time, and it's intuitive. It's domain knowledge that is inside of your brain or or inside of, like, the team lead's brain. And so it takes hours, days, weeks of back in the day Webex calls and now Zoom calls to resolve that problem.

And so I think one of the use cases, a major benefit to network operations, is being able to do that: query and interrogate data dramatically faster, more efficiently.

I mean, you and I can do it. We can get a team, and we can do it. So it's, in that sense, augmenting an engineer.

There's no real analysis there yet. That's not really what I wanna get to in this first use case, but I think we could start there and say that that's a big deal. You agree? Yeah.

Yeah. Absolutely.

Hundred percent. I think, you know, there's even simpler approaches to handling this before you go full tilt and build out a full stack of applications.

Right. Right. Yeah. Yeah.

The point I kinda wanted to make first, about the trust and the reliability here: I a hundred percent agree that that is valid. I do feel we're in this bubble of AI where, you know, everybody's been promised the whole world and a hundred percent automation of their job. And, you know, let's boil the ocean and do everything. And I don't think that's reality.

It's one of those correlations I draw back to the early days of network automation, when we had to really just grind through the trenches to figure out applicable use cases.

Some worked, some didn't, but it took trial and error to really figure it out. And that's a hundred percent what we gotta do here: we gotta take this stuff, start putting it together, give it a trial and say, hey, can I do this with my logs and get this type of output from it? We learn from those, and we push forward. And we're gonna find areas where AI is not gonna be a hundred percent valid and usable, and that's okay.

You step back and you look at the automotive industry. I'm trying to dig deep with a lot of correlations between how the tech industry will go through AI and how the automobile industry went through automated manufacturing lines. And, you know, there were a lot of people at the time that were fearful that it's gonna be robots taking everybody's job. No one's gonna be building these cars anymore, and how do they do this?

Blah blah blah. And you look at it now, and while the large bulk of the manufacturing line has been automated, there are still workers there. You know, there is still a union presence, with workers on the floor doing jobs that make more sense for them to do. Yeah.

And I think we're getting there.

Yeah. Yeah. For sure.

And I think we just we we gotta work through it and figure out where we go.

Now I you know, simple use cases, where do we start?

You know, with the logs, it could be very simple: you output your logs in some form or fashion on some schedule, like every morning. This is what I used to do way back in the day as an entry level engineer. I'd sit down with my morning coffee, and I would scroll all the logs that I could get a hold of, router logs, switch logs, firewall logs, everything, and just kinda scroll through them and get a daily sense of how the infrastructure lived and breathed through logs. Yeah. And so it very well could be as easy as you grab those logs, you export them into a CSV, and then you drop them into an agent and have it start correlating and picking out anomalies, or have it tell you certain events, like how many interface flaps did you see in these logs, and simple stuff like that.

And then it outputs to you. And, like I mentioned at first, you basically take those values that are output to you, and you put them in a spreadsheet or a report, and you share them. And so that's something you can easily do in a matter of fifteen, twenty minutes instead of, like you said, really grinding through those logs and putting that stuff together over hours or days.

Yeah.

And that's a relatively simple use case.

Yeah. You know, the the thing is, Ryan, I think we can go even simpler.

Obviously, it's not a perfect method, but if I don't have the ability or wherewithal to even do that much: considering a lot of the models' context windows are getting larger and larger, when they used to be very limited, you can drop a lot of logs right into your prompt, and then literally as part of your input, your request, your query is, you know, summarize these, whatever it happens to be. So you can drop that right into your prompt and then, of course, properly craft your prompt, your query, your question, the task that you're asking the LLM to do. Right? So there is some context that you wanna add: this is the result, this is the form of the result you want back.

And I know people are already doing that, but that is one easy way to get started. Yep. Just query your data. Just drop that right into the prompt.
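To make that concrete, here is a minimal sketch of the approach Phil describes: drop a morning's worth of syslog straight into a single prompt. It assumes the openai Python SDK with an API key already set in the environment; the model name, file name, and prompt wording are just examples, not anything specific from the episode.

```python
# Sketch: paste raw syslog into one prompt and ask for a summary.
# Assumes OPENAI_API_KEY is set; "gpt-4o" and the file name are examples.
from openai import OpenAI

client = OpenAI()

with open("morning-syslog.txt") as f:
    logs = f.read()  # a modest log file fits comfortably in a large context window

prompt = (
    "You are reviewing overnight network syslog.\n"
    "Summarize notable events, count interface flaps per device, "
    "and return the answer as a short bulleted list.\n\n"
    + logs
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```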

Now, I mean, some of the skepticism comes from incorrect results, or what some people call hallucinations. Right? Yep. Things that sound good but are factually wrong.

And so I think part of the skepticism is, you know, we discussed, does it add value? We still need to find places, and I believe that querying data is one. But another one is, is it reliable enough? Right?

And so, as we talk further in today's episode, I know we're gonna discuss some of those mitigation techniques that both give us the relevant answers we want and also reduce the incidence of hallucinations as we go. Yeah. So one way to get started is to just drop those log messages into the prompt and add that. Again, the size of context windows is much larger with many models now. I don't remember exactly.

I think Anthropic is, like, two hundred thousand tokens or something like that. It's it's very, very large.

And then, of course, there is a cost associated with some of these things, some of the foundational models. You might have an enterprise account that your company has to pay for and things like that, but, certainly, that doesn't detract from its actual value.

Yeah.

Yeah. You know, even to that note right there, if you really have something like the use cases we're talking about here, or even if you're just wanting to play around, if you have a halfway modern laptop, it's relatively easy to download any number of applications out there and run your own model locally.

And it can take a very quick front end to build, and you've got a chatbot that's running on your local machine, or you can just drop down into the shell and run it through the CLI and paste it in there.

What would be the reason I would do that over using a foundational model with, like, an open API? You know, what would be, in your opinion, the benefit of doing that versus a publicly available model, what some people are now considering commodity models? Yeah.

You know what I mean?

Well, you know, it kinda depends on your specific work environment, the specific environment you're working in.

If you are working for a larger company, or a company of any size that really tries to control and lock down how much you can use that type of stuff, or whether you can even use Internet services for sensitive data, then firing something up locally allows you to still be able to do that, but work within the confines of what your corporate overlords expect you to do and not do.

And running a model locally means that nothing's hitting the Internet. You can shut your Internet off. And I talk to people who, when they work with sensitive data within their company, or they don't trust the models they're working with, straight up go into airplane mode on their laptop, shut down wireless and everything, and then work with the models locally.

Mhmm. Okay.

And it's perfectly viable there.

Now, with some of this stuff, you don't necessarily get the full value of the agentic workflows that you see in some of these chatbots like ChatGPT or Claude or Perplexity. You don't get those steps. You kinda gotta handle those yourself, but that's okay.

Right. Yeah. Yeah.

So, a front end application or interface: what would you recommend folks check out first if they're just getting started? Yeah.

So there's there's two that I I highly recommend right now.

There is Ollama.

Mhmm.

And that is a way to be able to host, locally, a lot of Llama-based models and others as well.

And they have a complete registry where you just do a model pull, and it downloads the entire model. Right. And then you say start it, and it starts the model, and you're dropped into a shell to start asking questions.

It's a very lightweight package or even a Docker container that you just fire up and you go to town.
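For reference, the flow Ryan describes looks roughly like this. The CLI equivalents are "ollama pull llama3.2" followed by "ollama run llama3.2"; the sketch below uses the official ollama Python package instead, and the model name and question are only examples.

```python
# Sketch of the Ollama flow: pull a model from the registry, then chat with it.
# CLI equivalent: `ollama pull llama3.2` followed by `ollama run llama3.2`.
import ollama

ollama.pull("llama3.2")  # downloads the model locally (one time)

resp = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "How many /24s fit in a /20?"}],
)
print(resp["message"]["content"])
```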

Ollama is really good when you want to start building integrations.

It integrates very, very well with packages like LlamaIndex and LangChain. Mhmm. And so it does really well there. But as a no code, low code solution, it's not necessarily the best.
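To illustrate that integration point, here is a minimal sketch assuming the langchain-ollama package and a local Ollama install serving the model; the model name and prompt are examples only:

```python
# Sketch: point LangChain at a locally running Ollama model.
# Assumes `pip install langchain-ollama` and that Ollama is serving llama3.2.
from langchain_ollama import ChatOllama

llm = ChatOllama(model="llama3.2", temperature=0)
reply = llm.invoke("In one sentence, what is OSPF area 0 for?")
print(reply.content)
```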

For that no code, low code case, there is LM Studio, which I recommend. If you want something that's more of a user interface you're familiar with, and you don't necessarily care to do deep integrations with programming, LM Studio is a great tool for that. And LM Studio also exposes a bunch of the nerd knobs of models that you can tweak and play with and figure out what you wanna do, and set it on fire and watch it go down.

So, I haven't heard of the second one, LM Studio, so that's really good.

Yeah. Ollama is what I use primarily. And for the audience's sake, it's Ollama with an O. Yes.

Distinctly different from Llama, L-L-A-M-A, the model. Correct. The large language model. Though, of course, they're related.

But, Ollama. So Google that, look it up, and check it out, and it does take five minutes to get up and running. That's not an exaggeration. That is not hyperbole.

It is that quick and easy, and a great way to start piecing these things together and experimenting. You know, something I wanted to throw in there is that it's also not necessary to use one of the large closed foundational models. You don't need to use GPT to experiment and do this stuff. You don't need to pay a subscription.

You don't even need to use some of the open ones that are really big, with hundreds of billions or even trillions of parameters. You can run something very small like Mistral, or Mistral 7B, something I'm experimenting with right now. And there are advantages and disadvantages to that. Like, you can fine tune Mistral 7B locally, whereas you might have a lot of difficulty doing something like that with a larger model, a larger open model that is, unless you have your own GPUs or use GPU as a service somewhere.

So there's a lot of that that you're gonna find on Hugging Face, where there are hundreds of thousands, if not, like, a million models. Yes. Some of them are, like, copy and paste models, where they make an adjustment to some parameters and, lo and behold, you have a new model.

But the spirit of what I'm saying is that there are a lot of options of models to play with, and some might be better suited for your domain and your purposes than others. Yep. Yeah.

When you're asking ChatGPT, hey, tell me about this stuff, whatever your question is. Mhmm. Remember that it was trained on the Internet.

So that includes Wikipedia pages on, like, the history of Italy, but it also includes Cisco documentation that's publicly available, Juniper documentation that's publicly available, and, like, Ryan Booth's blogs. Right? It's aware of all these things. And so what happens is you end up with a semantics framework and engine to understand, quote, unquote, understand, whatever that means, language, and then be able to produce language, hence generative AI.

Right? It generates the response in language form.

The fact that it was trained on Cisco docs and Ryan Booth's blog means that it knows stuff about networking. Great. But it doesn't know your domain knowledge. So I think one of the issues that people have, Ryan, is, like, oh, it gets stuff wrong about networking.

Well, it wasn't trained to be a network expert. It was just trained on publicly available knowledge, including networking stuff. Imagine using it primarily for, like, if you're a chef or a lawyer, whatever. Choose a field, and it gets some stuff wrong once in a while.

It wasn't trained for that. Yeah. So there's a big difference, and we are talking about getting started today, between experimenting with a huge foundational model, which is awesome, by the way, and there's a lot of value there. You might even wanna consider them general purpose models.

And then a much smaller model, which has far fewer parameters, is easier to work with, lightweight, like you said. And, often, they're trained on domain specific data anyway. So an interesting thing that I found is, you know, when we talk about model underfitting and overfitting, those are bad. But in a sense, they can actually be good, in the sense that your model was trained on very, very specific domain data and therefore can answer questions accurately about that stuff. So yeah. And I'm not talking about RAG or external databases. I'm just talking about a small model that was purpose built for that knowledge domain.

Yeah.

The example I like to use with this for people is: you can go out and spend a hundred, hundred and fifty bucks, sometimes more, on a Swiss army knife that has tens of different blades and different tools and different things on it that can get the job done. And it covers that wide range, and that's where it adds its value.

But if you need a shave, it's probably gonna slice your face up pretty good.

So why not just go spend ten, fifteen dollars? Myself, I shave with a straight razor, so just go buy a straight razor, and you're gonna get the best shave ever, because it's the right tool for the right job. Yeah. And you don't have to be overly blown out with larger models that can handle everything.

You need it to handle your specific function. And we are seeing more and more, as these workflows and these agentic applications progress, that it makes more sense to find a small model that works locally, as either a Docker container alongside the application or wherever you're gonna place it. Just a smaller model that can handle your API docs and translate them into natural language for people, or be able to pick out the right API to use when you need to do task A. So, yeah, it absolutely makes sense there, and I think that's a progression: we should all get comfortable with running local models to understand what we can and can't do with them.

Yeah. And it's a great way to get started, because you are running things locally, and you can break things and experiment and tweak things in a way that you can't necessarily if you're hooking into a foundational model. Again, those are still awesome tools, but it does open the door for you to experiment more, both for learning and also for understanding value. Yeah. And experimenting.

So that's a thing. You know, I'm looking at the results of a lot of the experimentation. A lot of it's from, like, the Hugging Face website forums and stuff like that. I'm looking at that and some other people that I follow online, and they're saying, look at this tiny model.

But using these secondary techniques, not just RAG, but, like, look at the choice of vector database that we use. In order to reduce latency in the query, we use this vector database instead of that database. And all of these options in building your entire system that you have to consider, other than, like, let's just put it into ChatGPT. You consider these things when you're building this out with a small model and using other techniques, like, perhaps, RAFT, retrieval augmented fine tuning, which I've been researching recently.

Very interesting. I've never done that, but I've I've been reading about it and watching YouTube videos.

And, lo and behold, the results are just as accurate, if not more accurate, than the foundational models, because you are layering these tools together. I'm not talking about tool calling, like with agents, but you're layering these tools together in a workflow that ends up with the result that you want. So experiment with that today, and you can do it easily and quickly. Like Ryan said, you can get Ollama up and running very, very quickly and start plugging these things in and seeing what kind of results you get based on a CSV that you drop in there or that you point the LLM to. Right?

And something as simple as that. I just use fake, synthetic data. Right? Literally in CSV.

So I kind of format what looks like a spreadsheet's tabular data. Right? And I get my headings for CPU utilization and make sure there's timestamps and simple things like that. That's it.

It's all made up, by the way. Just to experiment with, like, can I get the answer about, like, what is the average CPU over the past three days for this device? Can it actually locate that one router?

And it works. It works. It's not that hard. So it's pretty neat. Then that's a use case that I found is is pretty cool, because it's so quick to to get up and running.

And folks are, I think, by and large, comfortable working with simple tabular data, whether it's an SQL query or a data frame you're handling with Python, so you can do a lot of this stuff pretty quickly.
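Here is a rough sketch of that synthetic-CSV experiment, assuming a local Ollama model; the column names, device names, and values are all made up, as Phil says.

```python
# Sketch: build a fake CPU-utilization table, paste it into the prompt,
# and ask a local model a question about one device. All data is synthetic.
import pandas as pd
import ollama

df = pd.DataFrame({
    "timestamp": pd.date_range("2024-10-22", periods=6, freq="12h"),
    "device":    ["rtr-nyc-01", "rtr-nyc-02"] * 3,
    "cpu_pct":   [41.0, 12.0, 77.0, 15.0, 63.0, 11.0],
})

prompt = (
    "Here is CPU utilization data as CSV:\n\n"
    + df.to_csv(index=False)
    + "\nWhat is the average CPU over the past three days for rtr-nyc-01? "
      "Answer with just the number."
)

resp = ollama.chat(model="llama3.2", messages=[{"role": "user", "content": prompt}])
print(resp["message"]["content"])
```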

Yeah.

You you absolutely can.

It's helpful to also note, in this realm, as you're trying stuff like that: build out multiple possibilities.

You know? Is a vector database the right approach, or is, fine tuning the right approach?

And, you know, at first, it's a relatively straightforward conversation we can have right now. We can discuss the pros and cons of each, and we know what we know, and it comes out as it comes out.

But then once you actually start building, we start discovering what we don't know, and we start discovering the stuff that the experts and the data scientists and the PhDs don't tell us about actually implementing this stuff in the real world.

Mhmm.

And so those are the things we need to start pulling out. And I correlate it back again to the early days of network automation, when all of us were like, oh my gosh, we gotta automate all the things. We need to throw Ansible and Python at it and do everything. No one needs to be touching the CLI anymore. Let's go do it. And we all charged forward.

Management, leadership, everybody was like, yes, we need to do this. And then six months into the whole process, a team of network engineers is sitting around like, damn. We don't know how to write software.

Or now this is getting unstable, and every time we make a change, we break something else.

Well, yes. You're now introducing yourself to the field of software development, and we didn't realize that's what we were doing. We just charged into what we wanted to do, and that's the exact same thing here.

Mhmm.

Our software development teams, the ones that I'm exposed to, the ones that I'm around and I talk to, they don't have a full grasp of what it takes to build this stuff. We we're figuring it out. And and what does one trade off do to you versus another?

Oh, yeah.

Absolutely. And and same thing with network engineers in general. You know? We we've gotten strong with understanding how to integrate software engineers into our teams, but none of us know how to integrate data scientists into our teams. We're gonna go through that again.

So I'll I'll give a tangible example from something that I've learned myself.

Hopefully, it's simple enough to take something home from it. So when I first jumped in, I went into that exact same discussion that you brought up a little earlier. Should I go vector database, or should I fine tune?

Fine tuning looks really cool. It really makes sense to get in and tweak a model so that we have a model with our domain expertise, and we can ask it richer questions.

That's great.

But the second you start doing that, you've just introduced a data science training workflow to your application and to your DevOps practices.

And I can safely say most people probably have not built a fine tuning workflow for a production environment.

And the criticality of that is not necessarily the first time you do it. Everybody can get it done the first time, kinda like parachutes. You know? It's not the parachuting the first time that's gonna get you. It's the landing. That's the critical piece.

Yeah.

And, you know, everybody can parachute once, but being able to parachute twice is the big thing. And so with these workflows, you have to understand how to start introducing repeatable, predictable outcomes, and that gets complicated. You have a dataset that's been updated with a handful of new lines, new logs that come in that haven't been seen before, or you have new commands coming in that you wanna be able to push against a router. All these small tweaks affect how you do fine tuning and training.

Mhmm. And so any one of those done wrong can turn around and blow up your model and cause downstream effects when the user's interacting with it. So how do you mitigate that stuff? How do you handle that stuff?

It's not necessarily the most complicated thing to do, but it's not straightforward either. And so as a team, you gotta realize that when you do fine tuning like that, you then introduce a role for a data scientist to play. Mhmm. Who can fill that role? You? Can someone on your team do it, or do you now need to hire a data scientist to help you with these workflows?

Yep. Whereas with the vector database, it's a little more straightforward.

You don't have to do all the fine tuning workflows. You just have to normalize and serialize your data to pump into the vector database.

That workflow is a little more achievable for software developers once they get their head wrapped around it, and same thing for us network engineers.

And so that's that's where I see a huge contrast between the two. Mhmm.

You gotta understand what what you're setting yourself up for, and are you gonna be able to maintain it?

I do think that some of the small models can be fine tuned relatively easily. I say relatively because it does require some level of expertise.

So you can use things like LoRA, right. Mhmm. Not to mitigate, but to reduce the number of parameters that you're actually adjusting in your fine tuning.

And then, of course, fine tune a small model, and approach it that way rather than try to take on a huge endeavor.
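For readers who do want to peek at what that looks like, here is a hedged sketch of LoRA-style parameter-efficient fine tuning with the Hugging Face transformers and peft libraries. The base model and hyperparameters are illustrative only, and a real run would still need a training loop and a dataset.

```python
# Sketch: wrap a small base model with LoRA adapters so only a tiny fraction
# of parameters is trainable. Model name and settings are examples.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

lora_cfg = LoraConfig(
    r=8,                                   # low-rank dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # adapt only the attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the full model
# ...a Trainer or custom training loop with your domain data would go here...
```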

And then, of course, like I mentioned a couple times now, there's RAFT, which kind of combines both fine tuning and RAG into a new method of reducing hallucinations and incorporating domain specific knowledge.

But I think for folks getting started, I personally would not experiment with fine tuning the model just yet. Absolutely, you can. You can watch a YouTube video on it, and you see some guy do it in forty five minutes. Yep. It's not that difficult.

But when I watch those videos, I'm copying somebody. I don't have a deep understanding of what's going on. So, like you said, I got it working. That feels good. I got the ping replies. Right? It feels really good.

But the reality is that was just kinda like a walk through kinda scenario. Now, oh, I have to start handling real time telemetry data? Well, alright. How do we how do we do that?

Do I have to update my embeddings every twenty minutes, or every ten? Like, how do I do that? And if I do that, do I start introducing latency with this database instead of that database? Because I need to get the results back in real time. There are so many other components. Once you've got that cool high from getting it figured out the first time, there are a lot of components that we now need to start considering if you're building out a more complex system.

But, yeah, I do wanna say that should not preclude you from getting started. Get started by just, you know, maybe you have a local model, whatever, and you're using it to interrogate a local database of flow logs, and you're just pointing at it. You're not even using a vector DB yet.

You're not using an embeddings model to do all that cool stuff.

You know what? Why don't we talk about that? So we start without using a vector database to keep things simple, but why would I use a vector database? And, you know, I know some of the keywords and some of the concepts, but I wanna hear it from you, from your experience. And I also want to understand maybe what the pros and cons are for network data, because I'm already struggling with that, where sometimes a vector database is just not appropriate because of the different types of data that I'm handling.

Right. Right.

So for a standard RAG pipeline, most human interaction is gonna be through some sort of chat element, the traditional chatbot out there. Yep.

What needs to be done is the documentation, or whatever the data is, the logs, needs to be translated into an indexable form so you can quickly search it based off keywords and find the most relevant data from it. And so that's where vector databases come in.

And they allow the freedom to handle how you want to ingest data into, and remove data from, that vector database.

So if you have real time logs, you can have a data pipeline that pulls those logs and dumps them into a vector database, and then rips them out after a certain period of time, because you don't need them after x hours or whatever.

And so that's doable. And then you remove that burden of your LLM having to figure out what is relevant data. You lean on your vector database and a search, and you say, hey.

The user's asking for this information during this time frame. Go out and find all the most relevant data for x, y, and z features or these three routers based on the the prompt the user gave me. Find all that information.

Give me the most relevant information you can, and then let's get the LLM to actually apply its intelligence.

So just like you would do a Google search, you go out there and you search, you know, how do I shut down a VLAN across my infrastructure?

And it pulls back all these, you know, pages and pages of results, and you take, like, the top five and you start filtering through it. So that's how the the workflow works. You you take the results back from the vector database.

Mhmm. You can either summarize them with one LLM, or you can just take that snippet, dump it into the prompt that goes into the LLM, and then you ask it.

Answer the user's question based on only this data I'm presenting you. Nothing else you know, only this data. Mhmm. And that's where RAG actually is: you're really putting guardrails on the LLM to say, this is the only thing you can focus on. Yeah. And so that allows it to give much clearer responses.

It avoids the hallucination where it thinks, oh, yeah, when Steve was talking to Bob on LinkedIn six years ago about the Netherlands, we'll add that in with our router logs and give them a response. Yeah. It doesn't work that way.
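To ground that, here is a minimal sketch of the retrieve-then-answer loop Ryan describes, using chromadb as the vector store and a local Ollama model for the final answer. The collection name, log lines, and model name are all illustrative.

```python
# Sketch: store log lines in a vector database, retrieve the most relevant
# ones for a question, and answer using only what was retrieved.
import chromadb
import ollama

db = chromadb.Client()
logs = db.create_collection("router-logs")

logs.add(
    ids=["log-1", "log-2", "log-3"],
    documents=[
        "%LINK-3-UPDOWN: Interface Gi0/1 on edge-rtr-1 changed state to down",
        "%LINK-3-UPDOWN: Interface Gi0/1 on edge-rtr-1 changed state to up",
        "%BGP-5-ADJCHANGE: neighbor 10.0.0.2 Down on core-rtr-3",
    ],
)

question = "What happened to interfaces on edge-rtr-1?"
hits = logs.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])

resp = ollama.chat(model="llama3.2", messages=[{
    "role": "user",
    "content": f"Answer using ONLY these log lines:\n{context}\n\nQuestion: {question}",
}])
print(resp["message"]["content"])
```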

Although there is still a possibility for hallucinations even with RAG and a vector DB, because sometimes the process of identifying semantic proximity, words or data that appear close to each other in vector space and are therefore treated as relevant or related, picks out things that aren't actually related. And we know better as humans when we get the result back. So there is still a possibility for hallucinations with RAG as well. And I saw something from Anthropic they put out about contextual retrieval. Oversimplification here, but they're adding a header to the chunks so that the headers can exist as embeddings in the vector database. That way, the system can say what chunks of data, not just individual words, but what chunks of data, a hundred words, five hundred words, whatever the size is, relate to each other. But we are talking primarily about text.

Yeah. I've struggled with this when we get away from text and into just metrics and hard data like that, and that becomes more difficult, for me at least, because I'm not a super expert. But I will say that there is still a lot of value in just looking at tabular data.

And so I read this blog post where the guy called it, instead of RAG, TAG, tabular augmented generation, and it really was that. It was just looking at tabular data. Right? And he showed different methods to do it that I experimented with, and it was effective.

It was very effective, and obviously still a type of RAG, just minus the vector database portion.

Is that right, or am I wrong there?

Does the whole vector DB approach really lend itself much more to text, and not as much to metrics and the data that we're used to in networking?

Well, it it can be used in any scenario.

Okay. But it's one of those if you understand what you're trying to do with it, that's where it makes sense.

So if if you have a vector database of all this information, you know, tabular data, just raw raw text or whatever, And you're gonna take those results Mhmm.

And you're gonna ultimately push them through an LLM to to gain more insight or to summarize.

You're then asking an LLM to take non-natural-language data and convert it to language data, and it's not necessarily good at that anyway.

So if that's where you're going with it and you need to work with tabular data, you might need to introduce more steps into your workflow, Mhmm, that help summarize the data being pulled back before you ask the LLM for the logic. Because LLMs are great at that. They can translate data into natural language relatively well, and so you help them with that.

So it kinda depends on your use case, but I do a hundred percent agree. The use cases I work through that I struggle the most with are usually structured data or semi structured data, where it's a mix of, like, a JSON file that also has elements of text throughout.

And so those are areas where I struggle, and I think those are the ones that start becoming more of the advanced use cases.

Yeah. Yeah. Let's get into, the next use case actually because we're we're kinda dancing around it, which is doing more of a true data analysis.

A large language model in and of itself can't do that. I mean, it's not applying a clustering algorithm or linear regression to data. It's not doing that kind of predictive analysis. But it can absolutely be a front end, a quote, unquote, front end, for us to interface with data, where there is an element of tool calling or whatever you're doing to perform those discrete mathematical functions, whatever it is that you're doing with the data. Perhaps it's already preprocessed, by the way. Perhaps you did that, and now your vector database holds already preprocessed data. Problem solved. But if not, you can use the LLM as the front end of this kind of a workflow where these other things are happening, and then the LLM is your interface where you get the results back in human readable format, or maybe as a visualization, however you design your system.

Yeah. The first thing that I bring up when you're starting to look at this stuff, and that I always see as critical when you're trying to solve these complex problems or complex workflows, is the same advice we gave back in the network automation days: well, how would you do it as a human?

Rip out the LLM and replace it with your brain, and how would you process that?

What tools around you do you need to make that happen? Where do you pull your data from? How do you transition one step into the next step? Is it Python?

Whatever.

And then you have a framework for how this should play through.

Now, you know, this is probably the only time that I would agree to compare LLMs to human beings: that's basically how they need to be treated when we start talking about these workflows, agentic RAG workflows or just agent workflows, because that's their job. And you treat it more like: you are a software engineer, your job is to blah blah blah, or you're a network engineer, your job is blah blah blah.

And if you approach it from that angle, it it becomes clearer on when and where to put stuff.

And, you know, the very next thing also, especially when you're talking about tabular data, you're talking about structured data or even semi structured data. Yeah. Do you even need an LLM to do this logic?

Like, you know, with that type of data, it's usually relatively simple to handle some sort of logic just with Python. And so within an agentic system, you can do tool calling, or you can write scripts that help you massage the data to get it further down the line. And I think that's the key point you were bringing up: we're not just taking the data raw and dumping it into an LLM for an answer. It's a workflow that produces the data, pulls it together, structures it, massages it, adds elements that it's missing for this workflow, and continues it down the chain. And that's what we're getting into.

Yeah. But my use case there was specifically using the LLM to facilitate that. So you're right that we we don't necessarily need to, of course, and we've been doing that without an LLM for a long time. But but I see this as a use case for folks where you now can interface with data.

Right? Maybe it's a feature in your platform or you built it yourself. And behind the scenes, the LLM is taking your prompt and saying, what do I do with this? How do I route this to which agent to which whatever.

You know? And then, or or maybe it translates it for you into a Python script. Mhmm. And and you have that program within the system.

You as the user don't see that or know that it's doing that, but you design a system to do that, and it translates it to a Python script. Right? And then it goes out and does something with whatever libraries you installed to normalize some data, and then you get an average or a mean, whatever it is that you computed, and then you get back the result, and you send that along the workflow that you built in code. You built that workflow to do this, and it's gonna bring that result back to the large language model, which then sees it.

And it has prior instructions as to what to do with it: present this in a visual format, or present this in human readable text. And, of course, the data came from a specific database that you told it to go to. So all of that stuff is happening behind the scenes in a workflow.
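As a toy version of the workflow Phil walks through, the sketch below keeps the math in plain Python and uses a local model only to route the question and phrase the answer. Device names, data, and model name are made up for illustration.

```python
# Sketch: the LLM never computes anything; it extracts the device name,
# ordinary code does the math, and the LLM phrases the final answer.
import pandas as pd
import ollama

metrics = pd.DataFrame({
    "device":  ["edge-rtr-1", "edge-rtr-1", "core-sw-2"],
    "cpu_pct": [41.0, 77.0, 12.0],
})

def avg_cpu(device: str) -> float:
    # The actual analysis lives in ordinary Python, not in the model.
    return metrics.loc[metrics["device"] == device, "cpu_pct"].mean()

question = "What's the average CPU on edge-rtr-1?"

# Step 1: routing only -- pull the device name out of the user's question.
route = ollama.chat(model="llama3.2", messages=[{
    "role": "user",
    "content": f"Return only the device name mentioned in: {question}",
}])
device = route["message"]["content"].strip()

# Step 2: do the analysis in code, then let the model phrase the result.
value = avg_cpu(device)
answer = ollama.chat(model="llama3.2", messages=[{
    "role": "user",
    "content": f"Tell the user, in one sentence, that average CPU on {device} is {value:.1f}%.",
}])
print(answer["message"]["content"])
```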

That was a very simple workflow. It can get a lot more complex than that. It might be that the LLM router sends it to an agent, which does a thing, the result of which goes to another agent to do another thing, and you have an entire workflow of tool calling. Perhaps one of those things is a web search or an internal database.

There's a lot of different things that could happen. Then, when you do have a final result, you bring that result back to the large language model to synthesize into a response, however you instructed this entire workflow to synthesize it in the first place. A lot of power there. But that is what I believe is kind of the second pillar of use cases here: using the large language model as a really easy way to interface with data in a data analysis workflow. But it's not all about the large language model.

That's just the use case for it in the greater workflow of other AI tools. And when I say AI tools, they're just like ML models and, you know, web searches and things like that. So it's it's, you know, less AI and more, building out your own workflow.

Absolutely. Yep. Yeah.

One workflow, to kinda give some sort of real world use case of what you just talked about, or what we've been talking about back and forth: you have a vector database that pulls in data from wherever. This can be documentation.

If if we wanna stick with the logging Okay.

Information, let's do that. And the initial thought is, okay, let's figure out where this data pipeline is and how we get this data from point A to point B, point B being the vector database.

But to improve your workflows, add an extra step before you get to the vector database. Let's have an LLM inserted into the data pipeline: you pull all this data in, and it can be dumped into a vector database wherever, however you wanna handle it, and then have that LLM summarize each line of data. Add a new column, a summarization. It's also like metadata. You can generate metadata.

Yeah. Yeah.

And so have the LLM summarize that data for you in a way that is then meaningful for the search, and add that extra element of depth to it. A good example, and I'm gonna have to go to documentation here, is, you know, when you chunk up a document and ingest it into a vector database, each one of those chunks that gets converted into a vector really only has an understanding of its specific section of data in that document.

And, of course, that chunk can be anywhere from one sentence, two sentence, maybe even half a paragraph. But it doesn't have a larger scope understanding of what chapter it's in or or what subsection, things like that.

So then as you're dumping the data in, do that. Add a summary section that says, hey, this is coming out of such-and-such section, and it covers these topics. You have high level search terms that are easily triggerable by humans and then by AIs. And so you add that information, and you dump it into the vector as well.
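Here is a small sketch of that idea, in the spirit of the contextual retrieval approach mentioned earlier: before embedding, prefix each chunk with a short, LLM-generated note about where it came from. The document title, chunk text, and model name are invented for the example.

```python
# Sketch: enrich each chunk with a one-line context header before it goes
# into the vector database, so section-level terms become searchable too.
import chromadb
import ollama

def contextualize(chunk: str, doc_title: str) -> str:
    summary = ollama.chat(model="llama3.2", messages=[{
        "role": "user",
        "content": f"In one sentence, what does this excerpt from '{doc_title}' cover?\n\n{chunk}",
    }])["message"]["content"]
    return f"[{doc_title}: {summary}]\n{chunk}"

docs = chromadb.Client().create_collection("docs")
chunk = "Enable spanning-tree portfast only on access ports that face end hosts."
docs.add(ids=["chunk-1"], documents=[contextualize(chunk, "Campus switching guide")])
```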

That's actually been something that's come up over the past six months as a promising approach, Mhmm,

that the data science world has found to handle these RAG pipelines.

That was the contextual retrieval we were talking about.

Yes. Yes. Yes.

The paper I saw from Anthropic. Well, it was an announcement, not a paper. But yeah.

And so you can add data in there that that enriches what you have. Okay.

And then the same on the back end of that process, once the search is done, and I think we brought this up a little earlier.

Before you take that information and dump it into your LLM to say, hey, give me a good answer to their question with this, you might have one or two steps in there as well that introduce tool calling with LLMs, or introduce another LLM to add metadata or to further refine it. Mhmm. And that right there, if you're not messing with the data in any way, is what's considered reranking, where you take your search results, how they were ranked by the vector database, and then rerank them based on another set of logic.

And so these tools are what you build on top of your pipeline to further refine and improve it, like you mentioned. And so those are some examples of going through that.
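One common way to do that reranking step is with a cross-encoder, for example via the sentence-transformers package. A hedged sketch, with invented hits and an example model name:

```python
# Sketch: rerank the vector database's top hits with a cross-encoder
# before they are handed to the LLM.
from sentence_transformers import CrossEncoder

query = "interface flaps on edge-rtr-1"
hits = [
    "Gi0/1 on edge-rtr-1 changed state to down 14 times overnight",
    "BGP neighbor 10.0.0.2 went idle on core-rtr-3",
    "NTP resync event on edge-rtr-1",
]

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, hit) for hit in hits])

# Highest-scoring hits first; only the top ones go into the prompt.
reranked = [hit for _, hit in sorted(zip(scores, hits), reverse=True)]
print(reranked[0])
```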

And that process of reranking, I imagine, would just make the retrieval more accurate then. Not just It does.

You know, relevant, but I would actually get more accurate results. Again, reducing hallucinations and making it a more trustworthy and therefore useful system for me.

Exactly.

Yeah.

And what you're trading off is that once you introduce reranking, you're instantly adding latency to your response. Exactly. Same thing with the other steps in the pipeline: the more you add there, the more latency and the longer it takes to get back. And so you gotta keep those things in mind as you progress through this.

Yeah. Yeah. Something that I've always done, but something that I've had to revisit in recent days, is you really gotta sit and plan this out, pen and paper or on your computer, and kinda map out what you want your workflow to look like with the result in mind. What are you actually trying to do, of course?

What's your goal? What what's the result? What are the acceptable limits and everything? And and then plan that out.

And if you don't, you've just got this spiderweb of tools and ideas, and you don't know how to make any sense of it. Plan it out, what you want it to do. And I don't mean, like, use this tool here, use this LLM here. I don't mean that, but, like, in terms of function.

Right? And then you can plug those things in, and you can swap them out as you find that one, like, Pinecone introduces too much latency versus Milvus, or, you know, I'm gonna try Llama 3.2, and let's swap that out for a smaller model, which you can do. Yeah. And thereby adjust those pieces to get the result that's best for you, whether it's reducing latency and having a little more room on accuracy, or making sure accuracy is paramount and you're okay with a little bit of latency.

Like GPT o1. Right? We wait forty five seconds for an answer, and then you look at the chain of thought, and it's like, what is two plus two? And you see this ridiculous chain of thought, and I'm like, maybe you could have just come to that a little bit quicker.

There is some weird stuff going on there. But in any case, yeah, for getting started, writing it out might be a good idea. Write out what you wanna do, what functions you want to occur, and then start plugging in the tools. So that's a good way to get started, because it gets your mind wrapped around all of the pieces and how they fit together.

Because we are talking about getting started with this for real, not just throwing into ChatGPT.

Write me a script that does a thing, which is legitimate, and I use it for that kind of stuff too. And it's useful.

But going beyond that, it does require some forethought, some planning. Yep. And it's fun. I love it because you're building something. And me working in marketing, where I don't get to build stuff anymore, I love doing this because I get to build stuff and then fix stuff when it's broken, which it inevitably is.

You don't have to start... Go ahead.

Yeah. I was gonna say, from my experience with building this stuff, or helping people build out some workflows, to dip your toes in and get started, I've found that I do much better when I replace one element of that workflow at a time. Oh, yeah. Right. Yeah. If you try to take a workflow and just build out a full LangChain pipeline where every single step has an agent, it's gonna be miserable for weeks. I take a workflow.

I have it handled manually, or whatever the process currently is. Take one of those steps and replace it with an LLM agent, or an LLM in general. Get that functioning how you want it and then move on to the next one. Because when you do that, it opens up new areas that you might not have considered, and you gotta step through those. So I definitely learned that lesson the hard way, which is, I guess, my brain's way of liking to learn stuff. Yeah.

Right.

You know, beat the hell out of me and make it worse for me, but it it is fun.

It is often the best way to learn too. Yeah. I mean, muddle through and figure out what's going on. So let's go over some of the tools and concepts that folks should be familiar with, or can get familiar with with a little bit of research, that can help them get started like today, because you can build some of this stuff, stuff that other people will think is so sophisticated, but you put it together in half an hour on a Saturday.

Yep. So let's talk about that. You mentioned Ollama. What is Ollama specifically for the audience?

So Ollama specifically is a simple application. I don't know if you can consider it a framework, because there's not much to actually interact with there.

Okay.

But it it's a way to actually run a model locally.

And so it's an area that a lot of us just kinda glossed over for so long: well, how do you run a model?

Do you double click it and it just starts doing its thing? Or how do you do that?

Well, the model is just, well, it's a compiled model of, you know, magic. Let's call it that. Something you interact with through an SDK or an API.

And so when you run that model, you got to be able to host it. You gotta be able to provide it memory, GPU, all the resources it needs to function, but then you have to wrap it in tooling to be able to interact with it.

How do you do shared memory? So one conversation handles, you know, a back and forth between the two and understands what were the the questions that were asked three questions ago, or what were the results. You gotta handle memory.

How do you interact with it from an API standpoint? Where where's the query? How how does the query come into it?

How how do you clear out cache and and long term memory so you don't overload the machine and and blow out its memory?

There's a lot of this stuff that has to be taken into consideration.

Same thing as well with, like, how do I update the damn thing? I mean, there's no About section to click update on these guys, so you gotta update them. And how does that work? And so Ollama is one of those tools that handles that for you. So then all you have to do is just say, ollama run llama3, and it does it. And then you just have a pretty shell that you can drop into and ask questions, or there's now an API where you can integrate it into your workflows.

Okay. Yep.

And so that's where Ollama and LM Studio come into play, as they handle that for you. Right.

Now can you build your own? Absolutely. It's probably a little more daunting of a process than most of us wanna take on.

It's one that's piqued my interest for a little bit, and I kinda wanna do that.

But, yeah, that's kind of the gist of what Ollama and those tools provide. Yeah.

Yeah. It's my recommendation that you start there. I mean, I think it's the best framework slash application out there for that. Although I will check out LM Studio. That's something new to me.

And Ollama, you know, the zip is what, a couple hundred megs? And when you unpack it, it's like half a gig. It's a small, Yeah, lightweight program.

It's not not a big deal, so you do run that locally.

What are some language models that you recommend folks experiment with? Because we obviously mentioned ChatGPT and the GPT models, GPT-4o and o1. Is that the go to, or are there other models that you would recommend first?

You know, I I'm not always the best person to ask that question to, because I I don't pay too much attention to that right now.

I mostly focus on building. I mostly focus on getting everything else around it. And then when we need to sit down and look at what the best models to work with are, that's kinda where I go. But in general, Claude is kinda my default go-to for anything. Right.

And Sonnet is absolutely where I wanna push anything that does code generation.

GPT-4o, I still throw stuff that way, but I think it's more out of habit than anything. I actually got rid of that subscription a couple months back, so I just don't use it like I do Claude and Sonnet.

Perplexity, I still, I don't know how valid this use case is, but I use Perplexity when I want information that I know just hit the Internet.

I trust Perplexity and their search algorithms, through Bing and the Chrome browser or a Chrome API. I trust that they're gonna give me the information that just landed this morning or just landed this week. And so I still kinda go there.

I know you can add other elements and things like that to other models and chatbots, but that's still kinda my go-to there.

Okay.

When I'm looking for a specific use case and we have a workflow nailed down, that is when I basically hit Google, and I search for "Hugging Face leaderboard code generation."

Exactly.

And I find the leaderboard.

And then I grab the top two or three models, and I just start beating the hell out of them, and go from there.

And experiment. Right? This is not all set-in-stone kind of stuff. Experiment to find what works in your workflow.

I use GPT just because, I mean, it's there, but for my own experimentation I do prefer using pretty much all open and free stuff. Yeah. So I have been using Ollama with Llama 3.2 now. I did see that Sonnet update. That's really impressive, though I haven't experimented with it. I read about it, and it was pretty neat looking.

So moving on, you also mentioned that if you're not familiar with Python or maybe SQL and things like that, those are things that you should brush up on, though there are low-code and no-code methods out there. That is something that is part and parcel of building these things out. So that's another thing to get familiar with in order to get started. And then, if we are building out a RAG pipeline with a vector database, what vector database, free or paid, would you recommend folks take a look at?

Alright. My first choice for anybody to sit down and start playing with a vector database: there's a startup out of the Bay Area, good friends of mine. They're called Vectara.

Okay?

They are a full...

Never heard of that.

...RAG-as-a-service pipeline vector database.

I'm by no means, you know, paid to promote them or anything like that.

But I've learned that if you want something quick, something fast and reliable, and this is not something you've done before, it's an amazing service to start with. It makes it stupid simple to get a RAG pipeline or even a vector database up and going. K. Their free tier will cover almost every use case you have up until a production environment. And then once you hit there, you might wanna start looking at their paid services. Okay.

They are a SaaS provider, so you have the value of not having to worry about building your own vector databases and handling all the settings and hosting. Yeah. And blah blah blah. But they also go above and beyond that, where they offer the full chatbot experience around it, where it's just one API call to get your inference, get your data back, and then dump that into wherever. I highly recommend checking those guys out. Great, smart group of guys. They know what they're doing.

You said Vectara? Vectara. Vectara. Okay.

Yeah.

And then outside of that?

My next one I'm gonna be looking at is Qdrant.

Okay. Qdrant.

They're basically one of the stronger vector databases on the market.

They've been kinda getting the most attention.

And everybody that I kinda talk to that's been building in this space, that's kinda where they're going.

Okay. Interesting. And you haven't used either of those so far?

Those are kind of my first go-tos to start with.

Yeah. After that, you know, you got all the others. You got Honeycomb.

I say all the others, and then I can only name one of them.

Well, the only two that I've used are Pinecone and Chroma.

Chroma. Yes.

They're fine. Obviously, there's differences between free tier and paid tier, but that's been my only experience with this.

And, you know, when it really comes down to it, the underlying vector database, there's not too much to it. You vectorize some chunk of data, turn it into a vector, and then you store it in a table with an index.

That's it. Yeah. That is it.

So from that standpoint, you know, there aren't too many differentiating factors between any of them.

Where you do start getting into the differentiation is how quickly they return responses, but that's pretty much the same across the board. But also, what are the features?

Are they giving me the ability to do reranking once I do my queries? Or am I gonna have to build my reranking engine on my own?

What other tools do they give me to make my vector retrieval easier?

And that's really where it comes down to, the features.
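
To ground the basics described above, here's a toy sketch (not from the episode) of that core store-and-retrieve step: embed text into vectors, keep them in an index, and return the rows closest to a query. The embed() function below is a stand-in, not a real embedding model, and features like reranking would sit on top of this.

```python
# Toy sketch of what a vector database does at its core: store embeddings,
# then return the stored chunks most similar to the query embedding.
# embed() is a placeholder; a real system uses a trained embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    vec = np.zeros(64)
    for i, byte in enumerate(text.encode("utf-8")):
        vec[i % 64] += byte
    return vec / (np.linalg.norm(vec) + 1e-9)   # normalize to unit length

docs = [
    "interface GigabitEthernet0/1 is down",
    "BGP neighbor 10.0.0.2 flapped twice today",
    "DNS latency spiked at 09:00 UTC",
]
index = np.stack([embed(d) for d in docs])       # the "table with an index"

query = embed("why did my BGP session reset?")
scores = index @ query                           # cosine similarity on unit vectors
print(docs[int(np.argmax(scores))])              # the closest stored chunk
```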

Right. Right. Then there is, of course, the embedding model to consider if you're not using built-in embedding models. Like, if you're using GPT as your large language model, you're probably using the GPT embedding model, and things like that. So that is a consideration as well. And if you're building everything piecemeal, then certainly you have to make some decisions there. Again, that's something where I go on to Hugging Face and look at the leaderboard, just like you said, and then just experiment, because I don't really know what the difference is under the hood for a lot of these. Right.

So I look at it basically like the reviews when I look up a thing on Amazon, you know, and that's kind of how I go.
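
If you do pull an embedding model off a leaderboard, trying it out takes very little code. A rough sketch, assuming the sentence-transformers library is installed; the model name here is just a commonly used example, not a pick from the episode.

```python
# Rough sketch of test-driving an off-the-shelf embedding model.
# Assumes `pip install sentence-transformers`; the model name is a common
# example from Hugging Face, not a specific recommendation.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode([
    "show ip route summary",
    "packet loss between site A and site B",
])
print(vectors.shape)  # (2, 384) for this particular model
```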

Well, you know, with that one there, that's a whole other can of worms, and we could probably do an entire podcast on that one. And I still don't fully grasp everything in that whole workflow.

Yeah.

But here's kinda how I've come to approach it so you can have a good level of confidence in what you're doing.

You just throw your information at the vector database.

Take whatever they say online about chunking your data up and prepping it, and send it in there. Yeah. And once you get it in there, start working with it. You know, you start querying it.

You start asking. You build the workflow to pull it out and get it to work. Just get it to work. And once it works, then just start throwing every single use case you can at it, like all the different types of questions you can ask.

And once you hit that point, you're gonna start seeing the problems arise.

You're gonna see that, you know, your chunk sizes might not be optimal, or the data coming out of it isn't as rich. You'll start seeing these things, and it's like, okay, how do I start solving it? For me, it's usually like, oh my god, I have no idea. I'm a freaking idiot. I just need to go back to creating VLANs again.

Welcome to the club. Welcome to the group. Yeah.

But then you start looking around, or, heck, even jump on Perplexity or your best chatbot and ask them, how do I approach this?

And you'll get pointed to, okay, maybe you need to tweak your embedding model, or maybe you need to work with larger chunks, or you need to do this, this, and this. And then you can gradually step through this stuff and start seeing improvements. And so that's kinda my general "how do you approach this" answer.
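
As a rough illustration of that chunk-it, load-it, query-it loop, here's a sketch using Chroma, one of the databases mentioned earlier. The source file, chunk size, and question are made up; tuning them is exactly the kind of iteration being described.

```python
# Rough sketch of the chunk-it, load-it, query-it loop using Chroma.
# The file name, chunk size, and question are hypothetical.
import chromadb

text = open("bgp_runbook.txt").read()                 # hypothetical source document
chunk_size = 500
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

client = chromadb.Client()                            # in-memory instance for experimenting
collection = client.create_collection("runbook")
collection.add(
    documents=chunks,
    ids=[f"chunk-{i}" for i in range(len(chunks))],
)

results = collection.query(
    query_texts=["how do I clear a stuck BGP session?"],
    n_results=3,
)
print(results["documents"][0])                        # the top chunks you'd hand to the LLM
```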

Really feels like you're saying if you wanna learn it, you need to do it. Get started. It's gonna force you to really get into the weeds. You know, another option is to use, like, Azure AI Studio, and Amazon has, what do they call it?

Was it Bedrock? SageMaker? I don't know. Anyway, you have these tools out there where a lot of the stuff is built in and you pick and choose, and it's very, very straightforward. And you can still build something, build something useful. So that's a great way to do it, just to get familiar with a lot of the concepts and with the workflow, like we've been talking about.

But certainly there is tremendous value in building this on your laptop, in your basement, with a can of Mountain Dew and a Hot Pocket, really just plugging away trying to make something simple work. And that's my personal recommendation based on my own experience: just start with something simple. Don't build a really complex agentic workflow, as cool as that sounds, where maybe it's more than data analysis and you have agents pushing config. And, you know, in the grand scheme of that, the pie-in-the-sky idea behind agentic AI is this autonomous system of decision making and chain-of-thought reasoning, where it is, you know, understanding an output and then taking some sort of action.

So there is a really interesting potential future there, but start with just grabbing metrics from, you know, a CSV and being able to query that with human language, with you speaking, or rather typing, a prompt. Just start with that and then build from there.
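
A minimal sketch of that starting point, assuming a local Ollama model and a small CSV export; the file name and its columns are hypothetical.

```python
# Minimal sketch: ask plain-English questions about a small CSV of metrics.
# The file name and columns are hypothetical; the model is whatever has been
# pulled locally into Ollama.
import pandas as pd
import requests

metrics = pd.read_csv("interface_metrics.csv")        # hypothetical export
context = metrics.to_string(index=False)              # fine for a small file

question = "Which interface had the worst packet loss, and when?"
prompt = f"Here are interface metrics:\n{context}\n\nQuestion: {question}"

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=120,
)
print(resp.json()["response"])
```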

Yeah.

Let me throw one more thing in there, if you don't mind. It's a topic I've ranted about a little here or there.

But when you're doing this stuff, you need to have a way to benchmark your performance.

As you're playing with stuff, as you're shifting stuff in and out, you're not always privy to how you're affecting it, and it can change very, very quickly.

So being able to run a certain set of tests against it: hey, when you ask these questions, do you get this type of answer, yes or no? Does it hit this specific API endpoint, yes or no? It really is a set of tests. And every time you make changes, you need to be able to run those tests.
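
A sketch of what that set of tests can look like in practice. Here, ask() is a hypothetical wrapper around whatever pipeline you've built, and the expected terms are deliberately loose keyword checks rather than exact-match answers.

```python
# Sketch of a tiny benchmark harness. ask() is a hypothetical wrapper around
# your own RAG/LLM pipeline; the checks are loose keyword matches, not exact answers.
def ask(question: str) -> str:
    raise NotImplementedError("wire this up to your own pipeline")

TEST_CASES = [
    ("Which device dropped the most packets yesterday?", ["edge-router", "packets"]),
    ("Summarize BGP flaps over the last 24 hours", ["BGP"]),
]

def run_benchmarks() -> None:
    passed = 0
    for question, expected_terms in TEST_CASES:
        answer = ask(question)
        if all(term.lower() in answer.lower() for term in expected_terms):
            passed += 1
        else:
            print(f"FAIL: {question!r} -> {answer[:80]!r}")
    print(f"{passed}/{len(TEST_CASES)} checks passed")

if __name__ == "__main__":
    run_benchmarks()
```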

While this is exactly what's needed for stability, it's also one of those things that is very much a confidence booster. I've run into multiple new teams, multiple new people.

When you get in the thick of this stuff, you know, the complexity of it and the unknowns really beat down your confidence.

It definitely did for me. You know, that whole thing of "I wanna go back and build VLANs and just work tickets again," it is a hundred percent true.

When you have those benchmarks and you have numbers and you have results that show which way you're headed, it increases your confidence tremendously.

And so building those in from the very start is, I feel, critical. And that's kinda why I wanted to make sure it was brought up in this whole conversation.

You know, that's actually a really good point. Thank you for that. That's not something I was thinking about, but it's certainly very, very needed. So, yeah.

My confidence gets shot all the time. Yeah. Every time I go online to research something, really, just every time I go online, period, all my feeds are completely inundated with new stuff that's coming out. New papers, new blog posts, new releases.

It's kind of overwhelming. And then when I wanna go experiment with something and try something out, you know, you go watch a YouTube video and the guy is like, thirty-seven point six seconds and you can have this up and running. And I'm sitting there struggling with just getting started on step one, you know. But, I mean, these are folks that do this day in and day out, laser-focused on the AI component and making this stuff work.

Whereas...

Absolutely.

...you know, we have to balance that with how this applies to IT operations and network operations.

And so I am very interested to see how this is gonna play out in the coming months and years, how large language models and agentic AI in particular are going to impact network operations. And I'm already starting to see it with several different companies, including my own company, with what Kentik is doing with LLMs.

And how that really augments an engineer and makes network operations better. So in any case, this has been great as always, Ryan. I love having you on. Not only are you a good friend and we can just kinda banter about stuff, but I just love what you're doing in the field.

And you've been a great sounding board. So I appreciate it, and having you on again to get a little bit more in the weeds than the high-level stuff, which is something I love to do and hope to do again with you in the future. Maybe we'll do a show on, like, just RAG or just fine-tuning one day. So for our audience, if you have an idea for a show or you'd like to be a guest on Telemetry Now, I'd love to hear from you.

I'd love to talk to you. Reach out at telemetrynow@kentik.com and we can start a conversation. So for now, thanks very much for listening. Bye-bye.

About Telemetry Now

Do you dread forgetting to use the “add” command on a trunk port? Do you grit your teeth when the coffee maker isn't working, and everyone says, “It’s the network’s fault?” Do you like to blame DNS for everything because you know deep down, in the bottom of your heart, it probably is DNS? Well, you're in the right place! Telemetry Now is the podcast for you! Tune in and let the packets wash over you as host Phil Gervasi and his expert guests talk networking, network engineering and related careers, emerging technologies, and more.