Telemetry Now  |  Season 2 - Episode 40  |  April 17, 2025

Telemetry News Now

OpenAI Releases GPT 4.1, Cisco and Juniper Partner with Google, Broadcom's AI-Driven Endpoint Security, Amazon's Project Kuiper

In this Telemetry News Now episode, Phil Gervasi and Justin Ryburn break down OpenAI’s release of GPT 4.1 and its real-world use cases, major Google Cloud partnerships with Cisco and Juniper, Broadcom’s predictive security feature, Amazon’s Project Kuiper, and shifting global dynamics in chip manufacturing.

Transcript

Telemetry News Now.

Welcome to another episode of Telemetry News Now. I'm Phil Gervasi, joined as always by Justin Ryburn. And we are recording a couple days out from, for me at least, a long weekend for the Easter holiday. Justin, are you doing anything this weekend? Are you going anywhere especially?

My daughter has first communion tomorrow night on Maundy Thursday as part of a Holy Week celebration in our household. So looking forward to that. Yep. Big day in our household. She's growing up. You know?

Yeah. Yeah. Growing up. I have three kids, and my oldest daughter's eighteen, graduating high school this year.

And growing up is very top of mind for me. On one hand, I'm really glad because I can see, you know, some amazing improvements in her life and changes, and that's exciting. But on the other hand, I'm like, please stay little. I don't want you to grow up.

But my, my brother is turning forty on Sunday, so we'll be celebrating both Easter and my brother's fortieth birthday at my mom's house.

There you go. Nice.

So that'll be a lot of fun.

So we are recording on April sixteenth, midweek, and we do have what I think are some exciting headlines to dive into. So let's get started.

From OpenAI's website on April fourteenth. You didn't think we were gonna get through an episode without talking about OpenAI, did you? Well, OpenAI releases GPT-4.1 in the API.

OpenAI just introduced the GPT-4.1 model family. So that includes models like GPT-4.1, 4.1 mini, and 4.1 nano. And these reportedly have significant performance improvements over previous models like GPT-4o and GPT-4.5, specifically in coding, instruction following, and long context understanding. That's really interesting because they added the ability to process up to one million tokens.

And I personally think that's a big deal because there have been and are other foundational models out there already operating at that level with very large context windows, whether it was a million tokens or not. And there are potential issues with filling up an entire context window and then expecting, you know, a completely accurate result. But, nevertheless, the ability to handle that is now there in this model family. So according to OpenAI, these models not only offer higher accuracy on industry standard benchmarks, you can go look that up, but also enable what they say are powerful real world applications in software engineering, legal document analysis, and agent based automation.

Right? That's the hot ticket right now. So, of course. And I personally think it's clear that OpenAI is trying to focus more on, you know, real world utility, not being that kind of generic, I say generic in air quotes, general foundational model.

They're taking developer feedback seriously. A lot of developers I know will use other competitive models. And, of course, you gotta love the naming convention. Right, Justin?

Yeah. If you're using the latest model, right, which is 4.5, you will now be upgrading to 4.1, and that makes total sense.

And upgrading that from GPT-4o. I know. Which wasn't o as in zero. It's o as in the letter o.

Exactly. For Omni. They're tough with their naming conventions. A couple of things to wrap up here with this one. The 4.1 family of models is available via API only, so you're not gonna log in to ChatGPT and be able to select it. But, you know, for those of us already running some sort of chatbot, you can swap in 4.1 to replace whatever you're using now. So that's available, and you're gonna see some improvements.
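To make that "swap in 4.1" point concrete, here's a minimal sketch of what the upgrade might look like with the OpenAI Python SDK. The helper name, system message, and prompt are illustrative, not from the episode; only the request parameters are assembled here, so nothing is actually sent over the network.

```python
# Sketch of swapping GPT-4.1 into an existing chatbot via the API.
# We only build the request parameters; the real call is shown in comments.

def build_chat_request(user_prompt: str, model: str = "gpt-4.1") -> dict:
    """Assemble keyword arguments for a chat completion call.

    Upgrading from an older model is typically just a matter of
    changing the `model` string, e.g. "gpt-4o" -> "gpt-4.1".
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful network assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }

params = build_chat_request("Summarize this BGP flap log.")
print(params["model"])  # gpt-4.1

# With an API key configured, the actual call would look something like:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(**params)
#   print(response.choices[0].message.content)
```

The point is that for most chatbot codebases, the model family change is a one-string edit plus a re-test, rather than a rewrite.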

And we're also gonna see 4.5 slowly deprecated as well, which is very interesting because a lot of the models don't get deprecated right away. You know? Like, you could still use GPT-4, and, you know, presumably 4o is gonna be around for a while. So it is interesting that they're gonna deprecate that right away. Not exactly sure why.

I saw a couple of things on, like, some YouTube channels I follow where folks had some commentary there, but I don't know if it's true or not.

Yeah. It's interesting how much they focused on the application it has to coding and software engineering. Right?

Yep.

They have an entire benchmark test on their website that shows how it compares to some of their previous models when it comes to coding exercises.

Definitely seems to be a big focus on that as the use case for this, which I found kinda interesting, especially considering it's available at the API. So, presumably, if you have, I don't know, VS Code or something that you're using as a front end, you can configure the API to be able to talk to GPT-4.1. I haven't tried that, Phil. I don't know if you have, but I presume that's the use case for this.

I agree. The graphs there were very minimalistic. I love OpenAI's, like, graphics department. They're, like, very, very simple.

So, yeah, you can read into it what you will. But, you know, they highlighted things like latency and accuracy, but also cost. So if you are doing a lot of production work with these models, which a lot of folks are doing now at enterprise level, you know, cost does become a factor. So, yeah, I agree and see clearly that OpenAI is, you know, trying to accommodate this developer life cycle, developer world, and activity.

And I don't know if 4.1 or any models within that family will become available, like, in the ChatGPT interface. If so, you know, that would be interesting to see and to experiment with there as well.

Yeah. The other thing I'll highlight on this, then we can move on, is they actually released a prompting guide with this one as well to try and help people write good prompts around coding to get the best results back from the model. So that's kind of interesting. So Mhmm.

Yeah. For sure. Absolutely.

So moving on. From the Cisco newsroom on April ninth, Cisco and Google Cloud have deepened their partnership by integrating Cisco's SD-WAN with Google's fully managed Cloud WAN. You may have seen this one already. So the idea here is that this is gonna enable enterprises, large organizations, to securely and efficiently connect on-prem data centers with cloud workloads using Google's global backbone. So, of course, you know, since you're on their backbone, you're gonna see faster performance, hopefully, simplified management, right, and more consistent security across your, in this case, hybrid environments we're talking about most likely. And from the Cisco side, you're gonna be able to use Cisco's Cloud OnRamp and then, of course, vManage orchestration for your SD-WAN.

Now according to Cisco, the move is gonna address growing complexity in enterprise WANs caused by cloud adoption and fragmented security models. And, Justin, that's something that we talk about at Kentik. Right? You know, the whole idea of lack of observability, visibility, and then inherent management over hybrid environments when you own certain parts of the infrastructure and then you have providers in the mix and cloud providers as well.

So, yeah, it's a new partnership clearly about unifying this hybrid network and, you know, addressing the security issues when you have a hybrid environment.

But before we move on to, you know, any kind of commentary, from the Juniper newsroom, also on April ninth, I just wanted to throw this in now, and we can kinda discuss both at the same time.

Isn't it interesting, Phil, that there were two announcements last week on the same day from competing vendors about partnerships with Google?

Yes. Yes. There was a Google event, which I believe you attended. Right?

No. I usually do, but, unfortunately, I wasn't able to make it this year. We did have a team there, though, so I heard good things about it.

Yeah. Yeah. So from their newsroom on April ninth, we read that Juniper Networks has expanded its collaboration with Google Cloud to integrate its AI-native Mist networking platform with Google's Cloud WAN. Now this is not exactly the same as what Cisco's doing with Google, but we'll keep going here.

So I think this is interesting and not at all unexpected, of course. Right? So according to Juniper, this would enable rapid deployment of secure, high-performance campus and branch networks directly from the Google Cloud Marketplace. And, yes, we are talking about utilizing Google's backbone as well.

So the integration is meant to simplify enterprise networking because now you can, you know, access Juniper's wired, wireless, NAC, firewall, and SD-WAN services as cloud-delivered solutions.

And, you know, like we've just discussed briefly with Cisco, reduce reliance on on-prem hardware, address some of those security inconsistencies, and that sort of thing. You know, for me, though, the language of Juniper's announcement had a lot more AI in it compared to Cisco's. They explained, they being Juniper, that this move underscores the growing importance of AIOps in delivering reliable connectivity for GenAI workloads and, you know, distributed enterprise applications. To what extent this actually addresses the complexity and performance demands, right, of modern distributed IT environments, I really don't know, because I'm not so sure that every enterprise out there is gonna be deploying those kinds of at-scale AI solutions that a lot of network vendors are expecting. You know, we've always had the build versus buy argument in software in general. Right?

Oh, yeah.

And a lot of the time, yeah, you're going to build your own thing. And I think, though, in the realm of AI, AIOps, and at the enterprise level, doing it at scale, we're going to see a lot more people tend toward buy than build, because the complexity of AI ML pipelines and the level of effort and resources required, you know, just the data engineering part, are very, very difficult. I don't necessarily expect to see every enterprise out there having these distributed inferencing activities going on. I think they're going to be purchasing. And maybe there is something to say there if, you know, the solution that you are using is looking at data in one cloud and then, you know, inferencing is done in another cloud. And so, you know, maybe there is something there.

Yeah. It's a little hard to tell from Juniper's article from their newsroom, but I wanna say that AIOps language is coming from the fact that it's integrated with Mist, right, which is the basis of their, call it, their SD-WAN. Right? They have Yep.

AI built into Mist, and have for a long time, to do a lot of things in their wireless networks, like finding a radio that's having an issue and migrating all the clients off of it. A lot of automation based on AI has been built into the Mist product for a long time, which is why it has such a great reputation in the industry. And Juniper has been rolling that out more across their enterprise switching line in their campus and branch, which, by the way, is the BU that did this announcement. So Mhmm.

Presumably, that's the WAN solution that they're integrating with Google's Cloud WAN. So that's why there's so much AI language in there. It's not so much that the AI workloads are running over the top of the network that they've built as much as they're leveraging AI to help make the routing and how the traffic is flowing Mhmm. On the joint network that they're building here more performant.

Mhmm. Yeah. I'm glad you brought that up because that leads into the next headline, this idea that AI is more than an LLM wrapper. Right? That there is some other type of activity that's been going on for literally decades under the banner of AI that we don't seem to talk about anymore, but it's still critical.

So I'm gonna move right on to it, and you'll see what I mean. This is from the Broadcom product newsroom, news release, whatever they call it, dated April fifteenth. So this is their website.

Broadcom has just introduced Incident Prediction, an AI-powered security feature for Symantec Endpoint Security Complete. That's SESC for those of you that, you know, like acronyms or initialisms, whatever that is. And it's designed to predict and disrupt cyberattacks, especially an attack technique called living off the land, which, if you're not familiar, is when you have some sort of an attack vector where the bad guy comes in and then utilizes whatever installed resources exist already, you know, WMI or, you know, whatever applications you have already, and uses those as their method of attack.

So this feature, Incident Prediction, was trained on over five hundred thousand real-world attack chains, and it's supposed to be able to proactively forecast attacker behavior. And what's also very important is that it can automatically then mitigate the threat without interrupting business operations or, as Broadcom puts it, overburdening SOC analysts.

I like this approach because it's this idea of intelligent real-time defense. It, you know, reduces the need for blunt responses like quarantining devices, locking out users, and all that kind of thing. But I also think, like I was alluding to just a moment ago, that this is a compelling example of how AI is moving from, you know, detection to full-on tactical defense, and that it's not an announcement of a chatbot or the latest LLM wrapper, you know, around some other feature, which, by the way, I think those are awesome and very cool and great. That's fine. But this is acknowledging the whole world of predictive analytics that has so much just tremendous value in what we do in IT operations and, in this case, in security operations.

So, I think this is a really interesting announcement from Broadcom. You know, from a technical perspective, at least for me, it's very fascinating.

Yeah. I had never heard of a living off the land attack, so this was really interesting, to learn something new about that type of attack. And, yeah, I think both you and I, Phil, have sat through industry talks where people talked about how many attack vectors AI brings, because it makes bad actors, the people that we're trying to stop, more powerful. They can use AI to be more advanced in their attacks, whether it's using audio recordings to make it sound like it's someone's voice on the other end doing social engineering type attacks, phishing attacks, or, you know, other types of attack vectors.

So most of the focus of talks that I've seen is on how AI can be used as an attack vector. It's good to finally see some news coming out about how we can actually use it to stop different types of attacks, being able to use the predictions that AI can bring, what the next move's gonna be, to be able to see that this is an attack and not legitimate user behavior. So Mhmm. Yeah.

Really fascinating.

Yeah. It is. And I don't think it's new. You know? I mean, especially in the security world, there have been folks using some of these techniques, like traditional clustering algorithms and regression, you know, time series family model kind of things, where they can identify events that go together that are indicative of an attack.

They can compare activity to, like, a current threat feed or a CVE, things like that. So that kind of activity has been going on for a while, sometimes homegrown. Sometimes it's just part of your SIEM and not really utilized.

The desire there is to reduce the number of false positives, because, like, level one SOC analysts are chasing every little thing that they see, and then, you know, eighty percent, ninety percent are nothing to chase. So how do you reduce that? If you can add intelligence where you can really accurately identify what's real, what's not real, or what needs to be addressed because it's a concern, that's really cool. And I'm glad, you know, I'm glad to see that. I've been really, really interested in the realm of ML and AI for some years.
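As a toy illustration of the kind of traditional analytics being described here, and emphatically not Broadcom's actual method, a simple baseline-and-deviation score can suppress routine events and escalate only strong outliers. The event counts, names, and threshold below are made up for the example.

```python
# Toy sketch of baseline-based triage: score an observed event count
# against historical behavior so only strong outliers reach an analyst.
from statistics import mean, stdev

def anomaly_score(history: list, observed: float) -> float:
    """Return how many standard deviations `observed` sits above the baseline."""
    mu = mean(history)
    sigma = stdev(history)
    return (observed - mu) / sigma

def triage(history, observed, threshold=3.0):
    """Escalate only events that deviate strongly from the baseline."""
    return "escalate" if anomaly_score(history, observed) >= threshold else "suppress"

# Hypothetical hourly counts of, say, WMI process launches on one host.
baseline = [4, 5, 6, 5, 4, 6, 5, 5]
print(triage(baseline, 5))    # suppress: within normal behavior
print(triage(baseline, 40))   # escalate: possible living-off-the-land activity
```

Real products layer clustering, correlation, and learned attack chains on top of this idea, but the operational goal is the same: shrink the pile of alerts a level one analyst has to chase.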

And then, of course, I jumped on this bandwagon with LLMs the past two years as well, because it was the latest manifestation of that. And so that's cool. But I wonder if this excitement around large language models the past couple years, and now getting into agents and things like that, is actually reawakening our industry, IT, but everyone in general, to the value of, like, traditional AI operations, like predictive analytics, forecasting, you know, identifying correlations and patterns and things like that, stuff that, again, we've been doing for years and years and years.

And it's, you know, stuff that, if you have the data, you can start building an MLOps pipeline right now and start looking at data in your own infrastructure. And so, you know, it kinda went under the radar for a long time. And now folks are like, oh, we can do this. Well, we've always been able to, but this, like, recent buzz around large language models helped reawaken that.

I don't know. That's just something I've been thinking about.

You know, I talk to a lot of people who are really excited about networking for the first time in a long time.

Networking has gotten boring. I think you brought this up on one of the Telemetry Now podcasts recently, that, you know, you can only talk about BGP or the latest routing protocol or some new TLV that's been added into the new routing protocol so many times before it's like, yeah, I'm kinda tired of talking about this. Right? So I think, if nothing else, AI is bringing some new, exciting things to networking that are interesting to talk about for the first time in quite a while.

So I think that's why a lot of people are excited. Like, I was talking to somebody at the AutoCon conference last fall, and they were saying that they feel like a kid again. Right? Like, they could wake up every morning excited to, like, play with the latest model and try and apply it in some new, different way.

There's just a lot of new cool fun things to learn, and I think that's why, you know, there's a lot of excitement around this.

And Mhmm.

The other thing that came to mind as you were talking about how long this has been around, specifically for security, is one of the challenges that security practitioners have: they wanna be a little bit careful in how much they share about how they're blocking attacks and how they're protecting themselves.

Because the more that the bad actors find out about how they're protecting themselves, the more they can turn that against them. Right? And so we, as, you know, networking professionals tend to think in community and sharing best practices and sharing everything we possibly know with each other. But, you know, security practitioners have to be a little bit more careful than that because it could be turned against them.

So you're right. There probably has been a lot more AI used in defense Mhmm. In security defense for a long time. It just doesn't get talked about much because people can't share their best practices.

Mhmm. Yeah. Good point. And, you know, one thing to consider here is that everything we've been talking about within the realm of IT, and, in this case, security operations, is operations focused.

It really is. You know, it's not like in marketing where you're trying to predict a churn rate or what visual ad people will most likely click on, or troop movement, you know, nation state military kind of things. We're talking about, how do I improve operations? How do I reduce the amount of time it takes to wade through all these new events and tickets that come into my SOC? Or how do I predict which SFPs are gonna fail in my data center rather than just, you know, wait for them to fail?

So it's all operationally focused. And that's interesting. It's meant to improve, like, the life of a network engineer.
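That SFP example can be sketched with nothing more than a least-squares trend line: fit the transceiver's declining Tx power readings and extrapolate when they'd cross a failure threshold. The readings, threshold, and function names below are invented for illustration; real predictive maintenance models are far more sophisticated than a straight line.

```python
# Illustrative sketch of predictive maintenance for optics: fit a linear
# trend to an SFP's daily Tx power readings and estimate when the value
# will cross a (hypothetical) failure threshold.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return slope, y_mean - slope * x_mean

def days_until_threshold(readings_dbm, threshold_dbm=-8.0):
    """Extrapolate daily readings to the day they hit the threshold."""
    days = list(range(len(readings_dbm)))
    slope, intercept = fit_line(days, readings_dbm)
    if slope >= 0:
        return None  # power is stable or improving, nothing to predict
    return (threshold_dbm - intercept) / slope

# A made-up transceiver losing about 0.1 dBm of Tx power per day.
readings = [-2.0, -2.1, -2.2, -2.3, -2.4]
print(round(days_until_threshold(readings)))  # 60 days until -8 dBm
```

Even this crude extrapolation turns "wait for it to fail" into "schedule a replacement window," which is exactly the operational framing Phil is describing.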

And that makes a lot of sense. That's why I see so much value. And, you know, in my opinion, that's one of the reasons I've been so interested in this field in the past few years for sure.

Yep.

Alright. Moving on to the next article, from PC Magazine, titled Amazon wants to build tens of millions of... I believe it's pronounced Kuiper. It's K-u-i-p-e-r. I'm gonna presume that's how it's pronounced.

I think that's Kuiper, like the Kuiper Belt.

Kuiper. Okay. Cool. Alright. Kuiper dishes to compete with Starlink. So all of the media and all of the hype recently has been around Starlink, and they are doing a fantastic job.

No doubt. We've talked about it a number of times on the podcast, some of the underserved markets they're going into to help people get high-speed Internet who have never been able to, a lot of times because of the terrain.

But there are other companies in that game. I know you had a recent podcast talking about satellite Internet and low Earth and medium Earth orbit, and there are various different operators out there beyond just Starlink. And, of course, Amazon's got their own thing, this Kuiper that we're talking about, where they wanna build tens of millions of dishes and send up satellites to build a full mesh similar to what SpaceX has, to be able to compete with Starlink Internet. So, you know, so far, they're playing a little bit of catch-up.

They're behind Starlink as far as the number of satellites that they have in the sky. But they're working hard on manufacturing these new satellites, and then they're obviously gonna have to do some launches to get these into the sky so that they can provide this service. So, yeah, Jeff Bezos, who's still obviously affiliated with Amazon, has his Blue Origin space company.

It's not talked about in the article, but, presumably, they'll be the ones helping Amazon put these Project Kuiper satellites up into the sky.

Yeah. There's some hurdles here. There's tight timelines. There's logistical hurdles. I mean, they're building everything in house, so there's that.

Maybe that's not a hurdle. Maybe that's a good thing. But according to the FCC, there's a mandate that they have to launch half of their thirty-two hundred satellites by mid next year, mid twenty twenty-six. So there's scale.

There's a timeline urgency behind this investment. And, I mean, that also signals that Amazon is very serious about being a contender in the satellite Internet market. Is that what it's called? I don't know.

But I agree with you. Catching up with Starlink is not gonna be easy. I mean, we're talking about anticipating launches of the first production satellites.

Well, and, like, even Amazon was prepared to launch its first production satellites into orbit this past Wednesday, or maybe it was today. It's not clear from the article which Wednesday they're talking about, but they had to scrub that launch due to the weather. Right? So that's the kind of challenges and logistics that you deal with when you're trying to build an Internet-based, I'm sorry, a space-based Internet company like Starlink or Kuiper.

You gotta continually deal with changes in the weather, being able to do your launches or not being able to do your launches, launches that don't go the way you expect, and so forth. So it's not easy to get these things up in the sky. And even once you get them in the sky, your recent podcast I found really fascinating, where you were talking about all the challenges in keeping them in the right lanes so they can communicate with each other, not running into each other, not running into other space debris that's up there.

There are all kinds of challenges that we, who are normal terrestrial network engineers with cables in the ground, don't really think about.

Yeah. Speaking of the ground, there's also the initiative to build the ground-based stations, the gateway stations, right, which, you know, you sort of need for the satellites to communicate with to get the signal back and then propagate it out here on Earth. So there's a lot of activity there. And you mentioned, you know, rocket failure.

This isn't like Starlink using SpaceX's rocket systems. It's a different animal that Amazon is dealing with. So, yeah, catching up with Starlink is not gonna be easy. That's for sure.

Alright. Moving right along, we actually have a few articles related to NVIDIA that we'll talk about here. The first one is actually from NVIDIA's blog site itself, talking about how they're going to be moving some of the manufacturing of their AI chips, of their silicon, back to the US for the first time. So I think we've talked about it previously on the podcast, but NVIDIA doesn't actually manufacture their own chips. They don't actually own the factories that manufacture their chips.

They design them, and then they outsource the actual manufacturing to a number of different vendors, whose business model is to build manufacturing plants and do all the tooling to be able to build the chips. And the primary company that they use for their Blackwell chip, which is what they're planning on bringing back to the US, is a company called TSMC, Taiwan Semiconductor. Mhmm. Taiwan Semiconductor is building a plant in Phoenix, Arizona, with plans to build some additional ones in Texas.

Once those facilities are built, then they'll be able to manufacture some of these AI chips back in the US. They also have partnerships, which I wasn't aware of until reading this article, with a company named Foxconn, which is a company we used to manufacture some of our silicon when I was at Juniper as well. They're building facilities in Houston and in Dallas. So, obviously, NVIDIA is investing heavily in trying to come up with other places to manufacture their chips due to the recent tariffs, which, you know, is what the tariffs are intended to do: cause companies to try and shift more of their manufacturing to places where the US has a little more friendly trade agreements.

So Yeah.

I mean, the CEO of NVIDIA, Jensen Huang, commented, "Adding American manufacturing helps us better meet the incredible and growing demand for AI chips and supercomputers, strengthens our supply chain, and boosts our resiliency."

So, you know, that's a that's a very corporate answer. Right?

Well, I think, you know, one thing that's kind of teased out in there a little bit is, even if it weren't for the tariffs, and even if they weren't having to shift around where things are being manufactured to lower their costs with tariffs included, we know there's a lot of demand for AI chips. Right? So they're having to scale up and manufacture more chips in more places just to deal with the demand. Right? So the question would be, even if the tariffs weren't in place, how much of this would they be doing?

It might not necessarily be on US soil, but I bet they'd still be working with TSMC and Foxconn and some of these other vendors to be able to scale up, just because they've got to be able to meet the demand.

Right? Yep. So the next article is actually from Reuters, talking about how NVIDIA is facing a five point five billion dollar charge as the US restricts sales to China. And I also found a Yahoo Finance article talking about the same thing. Essentially, the US has banned the export of the H20 chips that NVIDIA manufactures. So it's just specifically that chip, but that is one of its most popular chips that Chinese companies were importing into China. NVIDIA was exporting them from the US, and so they've been barred from exporting those out of the United States.

Once this news broke overnight, NVIDIA stock fell five percent in premarket trading today, Wednesday, April sixteenth. So Yep. That's where the five point five billion dollar number is coming from. That's what they expect the impact to their revenue to be from those chips having been banned from being exported from the US to China. So, you know, going back to the previous article, they're obviously gonna need to staff up, or scale up, in the US and other places where they can manufacture these chips pretty quickly.

Yeah. And those chips were interestingly designed initially to comply with other US regulations that we've had in place, and they were, you know, commonly used, especially, you know, by Chinese AI companies, you know, Tencent, Alibaba, ByteDance. Well, Alibaba is not Chinese. Is it? Actually, yes. It is.

What am I saying? It is.

Yeah.

But, anyway, it's used for AI inference workloads. So it's not necessarily, like, NVIDIA's highest end chip, but, certainly, that's where the concern is. They're like, wait a minute. This does fall in line with, like, supercomputers, therefore triggering the export ban.

So yeah. Yeah. This is, you know, the ongoing effort of the US to limit China's access to this kind of advanced technology, specifically computer technology related to AI, I guess, in the name of national security and in the name of trade deficits. Right?

Yeah. It's hard to keep up with all this. I mean, you know, as we know, there's, I think it's a hundred and twenty five percent now, reciprocal tariffs that the US has put on any imports from China, and then China's done the same for any imports from the US to China. And then, I think it was either late last week or early this week, President Trump announced they were going to pause that for phones and chips and so forth, which really helped Apple and NVIDIA, companies who actually manufacture their chips in China and import them into the US.

But then shortly after that, it was announced that there will be new tariffs specifically to that coming. So we're waiting to kind of hear what that's going to look like. So I feel for the CEOs at some of these tech companies. It's an evolving situation every day.

It seems like there's a new announcement every day that they've got to figure out how to deal with and how to de-risk their business around.

Does this apply to, like, existing deployments, you know, like what Tencent and DeepSeek already have? Does it affect that at all?

Well, presumably, if they already have deployments, they already have the chips running in their network, so it doesn't impact that as much. But, you know, they're gonna have to scale up. Right? They're gonna have to continue to add new chips and new servers as they scale up and build bigger and bigger models or have more demand for the existing models. So, you know, it really only impacts them if they're buying new chips, new servers. As far as I know, from everything I've read, it doesn't really impact existing infrastructure that you've already acquired.

Yeah. I mean, it is amazing how the geopolitical tensions of our day are literally one of the main maybe it's always been like this, and I'm just naive. But they're literally reshaping the AI landscape, in this case, from a hardware perspective. You know, if we're talking about export policy being now a major driver of market dynamics when it comes to this, corporate strategy, right, and and presumably even development.

And so it's very, I mean, I think it always has been on some level.

It just didn't move as fast as it seems like it has since, you know, the the Trump administration took over. Right? It just seems like it's moving at a at a speed and a scale that we've never seen before. It's, like, changing on a daily basis whereas normally policy takes a while.

Right? It gets discussed in one house, you know, either the House or the Senate, and then it goes through, it gets approved, it gets signed by the president. It doesn't move that quickly. And so if you're, you know, a CEO of a company that's going to be affected by new legislation that's going through the normal process.

You've got some time to work with your team and your board and figure out how that's going to impact you and what you're going to do about it. These have been moving so quickly through the process and changing so quickly. Mhmm. It's gotta be really difficult for companies like Nvidia and Apple to to deal with the constant changes, but they definitely seem to be doing the best they can with it.

So Yeah.

And that's the price of doing business. You know, that's what it is. So let's move on to upcoming events. On April seventeenth, which is probably the day you're listening to this particular episode, is the Massachusetts Networking User Group, part of the USNUA, and that's in Framingham, Massachusetts just outside Boston.

And, I will be on the panel for that one, so I'll be driving out there, which is a a lovely drive across Massachusetts from upstate New York where I am. So I'm looking forward to that tomorrow and seeing some friends.

We have AI Infrastructure Field Day, part of the Tech Field Day organization, and that is on April twenty third through twenty fifth, live streamed. Then, of course, you can see them on their website, YouTube channel, and in the subsequent blog posts and commentary.

The Virginia Networking User Group, also part of the USNUA, is April twenty fourth. And Justin and I will both be attending that. That's the one that's led by our good friend Scott Robohn, friend of Kentik, friend of the show, and friend of Justin and mine as well. So we appreciate that and look forward to seeing each other there. On May fifth through seventh, Justin will be attending Knowledge twenty twenty five, which is ServiceNow's user conference. Justin, can you give me a little color about that one? Because I don't know too much about that event.

Yeah. You can kinda think of it like the Google Cloud Next that Google had last week. It's similar for ServiceNow and for ServiceNow's product. It's their annual user conference. They'll have all their vendors there talking about the various integrations that their users might benefit from, and that's why I'll be attending. Kentik has an integration with ServiceNow, so we'll be there talking to some of our joint customers about the work we're doing with ServiceNow to improve our integration with them. So looking forward to that one.

Alright.

Next, we have the Colorado Networking User Group on May eighth. Justin's gonna be on the panel for that one. And then there's the Chicago NOG, CHI-NOG. That's not the USNUA.

That's a different organization. That's on May fifteenth. Justin will be attending that one. And, we're gonna stop it with a month out, May fifteenth.

Certainly, there are some exciting events coming up in the end of May that I can't wait to talk about, but we're gonna hold off on it for now.

So until next time, thanks so much for listening. Bye bye.

About Telemetry Now

Do you dread forgetting to use the “add” command on a trunk port? Do you grit your teeth when the coffee maker isn't working, and everyone says, “It’s the network’s fault?” Do you like to blame DNS for everything because you know deep down, in the bottom of your heart, it probably is DNS? Well, you're in the right place! Telemetry Now is the podcast for you! Tune in and let the packets wash over you as host Phil Gervasi and his expert guests talk networking, network engineering and related careers, emerging technologies, and more.