Telemetry News Now.
Welcome to another Telemetry News Now. I'm your host Philip Gervasi, joined by Justin Ryburn. And returning to us again is Leon Adato, who was not with us for the past two episodes. We're so grateful that you're back with us to hear your beautiful commentary, your insightful commentary. I mean, it could be beautiful as well.
I don't know. I'll try to make it as lovely as possible.
That's the first word that came to my mind when I thought of you, Leon. So take your pick.
There you go. I am I am luminously attractive to all genders.
Right. So let's, let's dive into the headlines.
I wanna remind everyone that Phil has pointed out more than once, we have no desire for Telemetry News Now to become nothing more than a collection of hacks, vulnerabilities, and breaches, but this is a news roundup. And, unfortunately, those things make up a big part of our news cycle these days, which is why I'm starting with a Network World article that reported that Cisco IoT wireless access points have been hit by a severe command injection flaw. For those who track these things, this is CVE-2024-20418, and it affects three products: the Catalyst IW9165D Heavy Duty Access Point, the Catalyst IW9165E Rugged Access Point and the wireless clients that go with it, and third, the Catalyst IW9167E Heavy Duty Access Point.
Now a successful exploit could allow the attacker to execute arbitrary commands with root privileges on the underlying OS of the affected device. However, it's important to listen to this part. These devices are only vulnerable if they're running vulnerable software in URWB mode, which is ultra reliable wireless backhaul. The article that we've linked in the show notes tells you, first of all, how to check and see if you're in that situation, and also how to secure your environment.
So there we are. Another hack.
Yeah. I mean, the interesting thing about this vulnerability is that it has been exploited in the wild, so it's not just a theoretical thing. It looks like attackers actually have exploited it in some companies' environments, so you definitely wanna get out and patch this one if that's a possibility, and depending on how long your change window cycle takes, maybe start by checking to see how vulnerable you are. Like Leon said, we'll put that in the show notes, but `show mpls config` is the command you have to run to see whether your access points are operating in the ultra reliable wireless backhaul mode and are vulnerable to this. Otherwise, you wanna patch quickly so that you don't become one of the victims here.
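To make Justin's check concrete, the verification looks roughly like this from the access point's CLI. Treat this as a sketch: the exact output varies by platform and software release, so check the advisory linked in the show notes for your specific model.

```
ap# show mpls config
! If this command returns URWB configuration output, the AP is operating
! in URWB mode and is affected until patched. If it returns nothing or the
! command is unrecognized, URWB is not in use and the flaw does not apply.
```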
Yeah. I think one of the only saving graces is that we are talking about rugged devices. And so they have a much smaller footprint than some of the other AP models that you're gonna see deployed in huge numbers in campuses and organizations. Yes, I have deployed rugged APs in the past, but far, far fewer than your standard models. That's probably the only positive here: the footprint, the blast radius, is smaller than it could have been if it was the more commonly deployed models.
Yep. It's, you know, it it's just one of those facts of life of IT of the IT world. You have to stay up on the news. Thank you for tuning in. And, also, you have to keep up with all the patches and the defensive measures.
But enough about hacks, at least for the moment. We'll see if there's more coming. Next up is an article from TechCrunch, which reports that Signal, the messaging app, has gotten new video call features, making it a viable alternative, I'm doing air quotes, to Zoom, Meet, and Teams.
Just as a summary, the popular messaging app has a heavy emphasis on privacy, but is adding a series of enhancements clearly designed to, I guess, try to compete with corporate meeting applications.
Signal's had an encrypted video call option since twenty twenty.
But what's new here is that instead of one person creating a group and then initiating a call to that group, you can now just share a link with anyone, and then everyone can jump on, assuming, of course, that they have Signal. Now I'm a big Signal user, and I enjoy it. I am not certain that I would abandon more traditional corporate meeting applications for Signal, but, you know, it's twenty twenty four, almost twenty twenty five. Anything can happen.
Yeah. I mean, I think this smells to me a little bit like shadow IT. Right? I think most organizations are unlikely to embrace Signal over whatever they've already adopted as far as Zoom, Teams, or Google Meet, which all, by the way, do encryption.
I mean, I agree with you, Leon. I like Signal. I like their privacy stance and some of the encryption things they do. But some of the other corporate meeting tools that organizations have at their disposal also do that encryption.
So I'm not sure we'll see a lot of organizations rushing to switch over to Signal anytime soon. It sounds like something that small groups of people use and then a certain team adopts, but not something that becomes the corporate standard for a lot of companies.
I agree. I mean, Zoom is fantastic, and I love Signal as well. I'm a big Signal user as well, like you said, Leon. But it is not going to be a replacement in the short term for Zoom, which is significantly more full featured for a collaboration application, for video chat and everything else.
So I I agree with you guys that it's cool. Right? Small group, is where I see it as well, Justin. I agree with you.
Alright. We must be off our game this week on Telemetry News Now because this is the third article in, and we're just getting around to talking about AI. But the next article is from Network World. It's clearly talking about AI, and Arista announced their financials this past week. It offers us a little glimpse into the AI world here, that being not so much models and new AI software development, but actually the hardware and the underlying networking infrastructure that powers a lot of these AI data centers that are being built. The article describes how the building of AI data centers with distributed GPU clusters is driving sales for Arista switching. We talked a couple episodes ago about how Arista is building AI features into CloudVision.
But this article is really, like I said, more focused on the switching sales that are that are driving Arista's revenue.
The article states that Arista has between ten and fifteen classic enterprise accounts that are trialing AI networks, but they have a very low number of GPUs involved in those pilots. That's compared to five hyperscaler trials, which Arista expects to grow to a hundred thousand GPUs per cluster and even more.
You know, I've been saying for a while that while a lot of the early data centers were built using InfiniBand, because that's the standard that NVIDIA supports and they're the eight hundred pound gorilla in the GPU market, I wouldn't bet against Ethernet. Ethernet's been around for a long time. It's won out against a lot of different networking technologies. And I think over the longer term, as we start to see more distributed GPU clusters being built out, we'll start to see more and more Ethernet being built.
And I think that's one of the things this article is pointing to: Arista is starting to get traction in these environments. They're starting to see a growth in Ethernet. Interestingly, the CEO, Jayshree Ullal, noted that a lot of these trials are actually using four hundred gig Ethernet.
The eight hundred gig ecosystem is just not there yet, but, she does expect that they'll see a lot more of their trials leveraging eight hundred gig, moving into twenty twenty five. So that'll be interesting to see if the Ultra Ethernet consortium, rolls out their eight hundred gig stuff.
So I will attribute the fact that this is our first AI article of the week to the fact that I was doing the first two, and there's just no way. We all know that I like to bag on AI news.
However, I will say this makes a lot of sense to me. Like, of all the things I can bag on, this is not one of them. And I really appreciate two aspects of the article. First of all, that Arista is investing time and money to make AI training and development faster and more efficient on the network side.
Like, they really are not just throwing heat and cooling and copper at things, like, yeah, just make it go faster. You know, they're really trying to take the existing infrastructure and make it more efficient. I appreciate that.
And also that they're willing to talk about it during the trial phase, so it's not the usual, we discovered cold fusion. Send us buckets of cash now. You know, like, it's not that kind of hype. They're just saying, hey.
This is what we're doing. This is where we're at with it, and we're excited. We expect good things, but who knows? So I really did appreciate the article for for what it was.
Yeah. Arista does seem to be at the forefront of this very niche type of data center networking. And, you know, data center networking is kind of the foundation of what Arista is all about, and I know that they had some announcements recently about, like, some efforts to move into the campus. But, I mean, really, when I think of Arista, I do think of data center networking. And I think it's important to right size their type of customer and what this is all about in our mind's eye, because we're not talking about, like, Bob's Pet Store down the street, or, more accurately, maybe a very large school district in your area that has, like, a server room with a bunch of racks. No. We're talking about a very small subset of customers that are building artificial intelligence workloads.
Maybe that's an AI company outright, or it's a very large enterprise that's willing to invest the time and money. And that requires very, very niche networking: four hundred gig, eight hundred gig, and more speeds and feeds, and, you know, mechanisms like scheduled fabrics and other things to reduce job completion time, so you're not training a model for eighteen months, and you can bring that down to six months, make it more viable.
Not to mention that, you know, Justin, I think you and I recorded a LinkedIn Live a while back, like, a year ago, when we were talking about ten thousand GPUs and then, like, thirty two thousand GPUs, which is huge. And now it boggles my mind that we're talking about a hundred thousand GPUs in these custom data centers to build and train these AI workloads. So another thing is that I do appreciate that Arista, they've always been, I mean, they are a for profit company, but I do appreciate that they've always been kind of open with their platform and with working with others. And the same in this case, working with the UEC, the Ultra Ethernet Consortium, to move the needle forward in adopting Ethernet instead of InfiniBand and adopting other standards. You know, this is a tiny little group of folks out there that are building these data centers and training models.
But don't, you know, don't let that fool you. That's a small group that's going to impact the entire, you know, IT landscape in the entire world. And I think we already see that, of course.
Mhmm.
Alright. Moving on. Sticking with the AI theme, the next article is from Reuters, titled Juniper Networks invests in AI startup Recogni, I believe is how it's pronounced, in a hundred and two million dollar funding round.
The article goes on to say that Recogni is a startup focused on being an infrastructure supplier in the data center providing the compute that's needed to run the largest generative AI multimodal models.
I personally had not heard of Recogni before this article, but there's so much investment in AI right now that it's actually not really surprising that I wouldn't have heard of them. There's a lot of money going into this area. What I found interesting about this one is that this company designs their own chips and then works with TSMC, Taiwan Semiconductor Manufacturing Company, to produce them.
That's what's called a fabless chip design, where the company that designs the chips, in this case, Recogni, does not own their own fabrication and manufacturing facilities. So they're outsourcing the fabrication and manufacturing. They're just doing the chip design. They're, presumably, building GPUs to compete with the likes of AMD and Intel and NVIDIA. You know, we all know NVIDIA's got a huge lead in this market, so we'll have to see if they can catch up. That's gonna be the market they're going after, and Juniper's investing to help fund that.
Yeah. Now see, this is this is more of the typical AI stuff that I like to ignore and sometimes make fun of and be snarky about. The most interesting part of this article was the fabless chip design part that that we've gotten to this point relatively quickly in the AI cycle. You know, this this is not new.
This is not unique. We've seen this on everything from pocket calculators to PCs, you know, where some organization designs a better chip but doesn't wanna have to build it. And so the surprising part is that we're here with AI already, that it hasn't taken, you know, more years, I guess, to get to this part. But otherwise, you're like, okay, AI.
Here we go. I'll let someone else have the rest of my time.
Well, this is Juniper investing in an AI startup, and Juniper's a networking company. So that's interesting to me. What are they investing in here? They're investing a lot of money, by the way, in a company that's building this infrastructure to do multimodal, foundational model size AI workloads.
So we're talking about large language models in the truest sense, as in large language models as opposed to, like, a Mistral or a Llama that's in the hundreds of millions or several billion parameters. We're talking about foundational models like Claude, GPT, things like that. So this is not messing around. This is not dipping your toe in the pool.
This is getting into the realm of commodity LLMs that folks are going to use as a service. And when I say folks, I mean, other companies building their own products around it, like a Google or a Meta.
And, again, also having this investment from a networking company. I'm not exactly sure how to interpret that, other than to say that, you know, based on the previous article, there are data centers that run this stuff. So there's a vested interest there as well to see that kind of growth.
I don't know if there's some ulterior motive as well, and that's interesting to me, because I don't believe that all of it is marketing hype. I think we are in the trough of disillusionment right now with generative AI, or at least with large language models in general and this whole AI conversation.
So I do believe that we are now looking out onto the horizon where we're gonna start to see, what's the next stage? Normalization.
No.
Breathe, anger, tensions. No. No. No.
I mean, on the Gartner hype cycle, what's on the right side?
Twelve step recovery plan on the other side.
Oh, right. Yeah.
Like, when you're out of that trough of disillusionment, you go, I can Google it, or you can Google it. But it's like that whole normalization, where we start to realize what are the real use cases and people stop talking about it. And I think that's where we're headed very soon, as we start to develop actual use cases. So yeah. The slope of enlightenment, that's actually the term. The slope of enlightenment. We are enlightened.
Your Google is faster than mine today.
You know, what I think is interesting about this being a networking company, Juniper, that's investing in this is, are they trying to compete with NVIDIA? Right? Like, you go back and you look at, I think I mentioned on one of one of the more recent ones we did where, NVIDIA is not only building GPUs, they have the networking, they have software to train the GPUs and get them provisioned and ready to go. So they're building an entire ecosystem around their their GPUs.
Right? And that's, in this case, InfiniBand based, which blocks the likes of Arista and Juniper out from those environments because they don't sell InfiniBand. They sell Ethernet. Right?
So Arista and Juniper are obviously investing heavily in building out these Ethernet environments. And if what they're gonna need is GPUs that play nice with Ethernet, and presumably the software and everything else that goes along with training those, that strategy kinda makes sense to me. That's their play here. Right?
I also wonder, I mean, snarky curmudgeonliness aside, I also wonder if Juniper may be thinking that they're gonna build an LLM that is purely network centric, that what we're seeing is people investing.
And again, Phil, to your point, this is, you know, a large, capital L-A-R-G-E, large language model that is network centric or network focused, that they then want other companies that care about the network to build off of, in the same way that other tools build off of Chat Jippity and the rest.
It's just a thought of mine: you would invest this much if you were this deep into the network and wanted to capitalize on all the already sunk capital of what you know, and what you can know, about the network, because you have all these devices.
That's interesting. Interesting. You know, to design a large language model specifically for networking, that makes a lot of sense.
Now the thing is that there are a lot of folks, myself included, well, to a very small extent, that are working on using very small language models, like several hundred million parameters or seven billion parameters. Think of something like a Llama or a Mistral, something like that. You train a much smaller model, because it's lower overhead, on your very specific domain data, and then use it in conjunction with other things like RAG, of course, and tweak it so you get very accurate results from your RAG system. There's also RAFT, which is a combination of RAG and fine tuning, and other mechanisms as well, so that you are getting very accurate domain results, right, or accurate results about your domain, networking in this case, and more specifically your network, but by using a very small model.
And so that way, there's way less overhead and effort to get there.
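As a rough illustration of the RAG piece of what Phil describes, here's a minimal Python sketch of the retrieval step. Everything in it is invented for the example: the keyword-overlap scorer stands in for a real embedding model and vector store, and the snippets stand in for your own network documentation.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant domain snippets, then assemble them into the prompt handed to
# a small language model. Purely illustrative, not a production retriever.

def score(query: str, doc: str) -> float:
    """Score a document by the fraction of query words it contains."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / len(q_words)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the context-plus-question prompt sent to the small model."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical domain snippets, e.g. pulled from your own network docs.
corpus = [
    "BGP peering to the data center core uses AS 65001.",
    "The guest wireless SSID is isolated on VLAN 300.",
    "Leap seconds are handled by the NTP servers.",
]

question = "which VLAN is the guest wireless SSID on"
top = retrieve(question, corpus, k=1)
prompt = build_prompt(question, top)
```

A real pipeline swaps the scorer for vector similarity and feeds the prompt to the fine-tuned small model, but the shape of the flow, retrieve then prompt, is the same.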
But, you know, I never thought about that, Leon, until just now when you mentioned it. You know, could it be that there is a market to develop a network domain specific model that is ready to go for ninety percent of folks out there, because most of networking is kinda the same? And then you just need that last ten percent, you know, onboarding a customer for your AI for networking as a service product, that last ten percent to get you there so it's ready for your specific network environment.
That would be interesting. I have no idea if that's what Juniper is doing, but it's an interesting thing to think about here. So moving on, an article in The Register from November eleventh. But if you Google it, there are articles all over the place today, because we had some big news out of Cisco. You may have heard of them, small network company out of Silicon Valley.
Cisco recently unveiled, in Australia actually, two Wi-Fi 7 access points, the CW9176 and the CW9178.
That's alongside a new licensing model and management of wireless devices across your cloud, your on prem, your hybrid networks, all of it together, incorporating both Cisco and Meraki management and security features.
So, you know, in previous days, and I remember when I was turning a virtual and physical wrench, you would have to integrate those manually yourself if you were running both a traditional Cisco environment and a traditional Meraki environment. Even after the acquisition, they were still pretty siloed for quite a while.
And so, I mean, obviously, integrating these two the way that they are now creates a much more global access point model that can maybe automatically adjust for regional requirements. As you know, with, like, radio frequency stuff, you have to take your geography into account and your country into account.
And then also, like, seamless switching between cloud and on prem management. Right? If you're running controllers in the cloud or some sort of as a service, or your on prem stuff, for whatever reasons. So let's talk about the features. Some of the key features of these new APs are built in, you don't see my hands right now, but I'm using heavy air quotes, AI enhanced capabilities.
And so that would be something like Cisco's AI RRM for RF tuning as well as advanced security measures such as AI native device profiling.
So if you're not familiar with RRM, right, that's radio resource management, and this is very nostalgic for me from back in my wireless days. It's a way for the system, like your wireless controller, to manage your APs, and specifically to manage them dynamically. Right? It changes the radio settings and all the parameters on your APs because it's optimizing the RF propagation in that space based on whatever information it's gathering from, usually, clients and then other APs in the vicinity. Right?
And it's doing that to optimize performance and optimize radio frequency propagation, all that. So the AI part here in AI RRM is that this whole thing is offloaded to some sort of AI engine. I don't know what that means exactly. That also uses historical data and, I have to assume, some sort of more sophisticated data analysis to make decisions about how to optimize RF settings.
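To make the RRM idea concrete, here's a toy Python sketch of the kind of decision RRM automates: picking the least-interfered channel based on neighbor AP scans. The neighbor data and the RSSI weighting heuristic are invented for illustration; real RRM, AI-driven or otherwise, folds in far more, like client telemetry, historical data, co-channel versus adjacent-channel effects, and regulatory constraints.

```python
# Toy channel selection: weight each detected neighbor AP by how loud it
# is, sum the weights per channel, and pick the quietest candidate channel.

from collections import defaultdict

# Hypothetical neighbor scan results: (channel, RSSI in dBm) per detected AP.
neighbors = [(1, -45), (1, -70), (6, -80), (11, -50), (11, -55)]

def channel_load(scans: list[tuple[int, int]]) -> dict[int, float]:
    """Sum a simple interference weight per channel; louder APs count more."""
    load: dict[int, float] = defaultdict(float)
    for channel, rssi in scans:
        # Map RSSI to a 0..1 weight: around -30 dBm (very loud) -> 1.0,
        # around -90 dBm (barely audible) -> 0.0.
        load[channel] += max(0.0, min(1.0, (rssi + 90) / 60))
    return load

def pick_channel(scans: list[tuple[int, int]],
                 candidates: tuple[int, ...] = (1, 6, 11)) -> int:
    """Choose the candidate channel with the lowest interference load."""
    load = channel_load(scans)
    return min(candidates, key=lambda ch: load.get(ch, 0.0))

best = pick_channel(neighbors)  # channel 6 has the single weakest neighbor
```

The "AI" versions of this replace the hand-written weighting with models trained on historical RF data, but the core loop, observe neighbors, score channels, retune, is the same.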
That's a lot of interesting stuff there because these APs also add greater capacity, have way more throughput. I mean, they're talking about speeds exceeding forty gigabits per second, which I assume is lab tested. I can't imagine that out in the wild just yet.
Better latency, all of that stuff. So, especially in high density, high volume environments, stadiums, gymnasiums, schools, colleges, that kind of thing, you're gonna need that type of really advanced RRM. I don't really know. A lot of cool stuff.
But wait. There's more. They're also equipped with GPS. That's pretty cool. And BLE, which is nothing new.
That's, what is BLE? Bluetooth low energy. Plus ultra wideband capabilities.
And, of course, they're gonna support SD Access. So if you're running SD Access, an SDA environment, you know, that kind of campus overlay, you can integrate it into that. So that's all the AP stuff. Now as far as the licensing, Cisco's new model, Cisco Networking Subscription, is supposed to streamline licensing by bundling software, hardware, and support into unified Essentials and Advantage packages.
I mean, supposedly, that's gonna help align renewals with your budget cycles, and, you know, you can co term things more easily.
I don't know. The AP stuff was interesting to me. I'm all about that. I have seen so many new licensing models rolled out from Cisco over the years, every other year, that you have to relearn. And I watch all of the videos and get all the presentations from the Cisco reps, only to have it change again two years later. So, yeah, I'm really excited about the news about the APs, not so much about the licensing.
Yeah. I I will say that I've always loved, like, what Meraki is, you know, and what they do. But I've honestly not understood why Meraki didn't become just Cisco a long time ago.
And I also don't understand why Cisco has this habit of, not exactly competing with itself, but creating product lines that appear to compete with itself. Again, like the Meraki and Catalyst stuff. So I will simply say that, for me, this is an it's about time thing, and I hope it ends up being easier and simpler for customers to buy the right gear. They don't have to think about, well, you know, I've got fifty percent Cisco, and I wanted to get some Meraki, but if I get Meraki, I can't get the signal. I don't like that kind of stuff.
Yeah.
Yep. It just makes it easier. I also wanna point out the article did mention that there's an off ramp for folks who have one or the other and wanna get to this new thing, but they don't have to. The old license will still persist for a while.
I mean, I used to joke when I was studying for the CCIE, which I never passed, by the way, but I used to joke with folks that they should make a CCIE in licensing, like, another track just focused on that, because it was, you know, convoluted, and then it changed so often.
Can I just go to ring?
Yeah. Right. I do think, though, that one of the reasons, based on my experience working with them over the years, is that, you know, they have multiple BUs, multiple business units, that operate somewhat independently, or, you know, maybe very independently of one another. And I know there were attempts to change that over time, you know, unifying leadership and overall vision and direction. But I believe that is probably one of the contributing factors, if not the main contributing factor, to why Meraki and Cisco kind of existed separately even after the acquisition for a while.
Yeah. I mean, I think that strategy actually made sense when they first did the acquisition. Right? I mean, when Meraki was acquired by Cisco, they had, as you were saying, a really good product. They had a really good following in the market.
It it takes time for Cisco to figure out where is this going to fit into the broader parent organization, right, into which BUs.
They have a history, and they're not the only one, of having some wars going on between their BUs that have competing products, with some overlap in use cases, some overlap in features. When I worked at Juniper, we had, you know, some of the same things. If you wanted an MPLS backbone, that's gonna be the service provider BU products, the MX and that kind of stuff. If you wanted enterprise switching, well, there's some overlap between the QFX and the EX.
It's like, which one do I use in which environment? That's just, I think, natural as you grow and scale as a hardware vendor. You're gonna have some overlap. But, yeah, I think, you know, integrating takes time, as Leon was saying.
Right? They I don't remember what year the acquisition was. I'm gonna say it was, like, twenty ten or eleven. I mean, it's been a number of years.
So it's probably — It's been ages. Ages.
Yeah.
It's been ages. So I think it's also interesting that they're applying AI to managing the RF, as you were saying, Phil. It's something that Mist was, well, I guess still is, really good at with the products that Juniper has acquired. Right?
And if you look at what Juniper has done since they acquired that company, they've combined it with Apstra, kinda tried to combine it into their campus and data center switching. So you have this integrated ecosystem, experience as a customer. I think that's where Cisco is going. Right?
You can buy the catalyst switches. You can do SD access to manage it all. You can do all your wireless. Like, they're trying to combine everything.
That's why a company buys in to a particular vendor and gets locked in, using air quotes here, to a vendor: it gives you simplicity in your operations when you have these things integrated. That's what customers want if they're gonna, you know, buy from a single vendor.
Yeah. Right. Right. So I saved our next headline for last on purpose because it is probably one of the most important headlines of the week. Now I say that in jest, but it is actually a sad headline, out of The Verge from November seventh. Elwood Edwards, who was the voice behind AOL's You've Got Mail, which I'm sure all of us remember, dies at age seventy four. So very sad.
So Elwood Edwards was the iconic voice behind AOL's You've Got Mail and others. He passed away just recently after a long illness.
And interestingly enough, his journey with AOL is not like he worked at AOL as an engineer. It began way back in nineteen eighty nine. What happened was his wife worked at Quantum Computer Services, which later became AOL.
She learned that the company needed a voice for its software. You know, they were just looking for some kind of a voice, and, apparently, mister Edwards had a broadcasting background. And so he recorded some phrases like welcome, file's done, goodbye, and also the very famous you've got mail, on a cassette tape, if you remember cassette tapes. I very much remember them.
And he recorded them for two hundred bucks. It was just a test, but they became incredibly popular. And if you remember using a computer in the early to mid nineties, those sounds, those words, and his voice became synonymous with the early Internet experience. So, what's interesting, and I had no idea until I started researching this, is that Edwards actually made some appearances on The Tonight Show with Jimmy Fallon.
He was on The Simpsons, which doesn't surprise me, because I think everyone's been on The Simpsons.
And he even worked as an Uber driver at one point. So, very sad news, but also just something interesting for our industry, and also for many of us, considering our advanced age.
So, first, I wanna point out there's a Cleveland connection, because mister Edwards worked at WKYC, which is the local Cleveland NBC affiliate here. He was a graphics supervisor. He also worked as a camera operator. He was both in front of and behind the camera, although most of his work for most of his career was behind the camera.
But beyond that, I think there's an interesting metaphor here about the technology that we know and come to love. It doesn't just have distinctive sounds, which it does. I think we all have a set of sounds that mean something to us, whether it's the clacky keyboard, which I associate with the old IBM PC, or, you know, various login sounds, whatever it is. Right? The technology itself has distinctive sounds, but also there's a feeling that we begin to associate with that sound that is incredibly personal.
It's not that everybody feels the same way about the clacky keyboard noise or any of those. And there's a reality that no technology lasts forever, but sometimes the sounds and feelings outlive not only the tech, but the people who were involved in that technology in the first place.
So I wonder how mister Edwards' kids and grandkids will feel years from now when, whether they're watching that movie or any sort of, you know, throwback to that era, they suddenly realize, oh my gosh, that's grandpa. Yeah. You know, like, they're gonna hear his voice come at them from unexpected directions. Yeah. Interesting.
Yeah. My personal story here is, my family got our first home PC, which was a Dell. Doesn't really matter for the purposes of the story, but I modified the sound that it would make when an email was received to that very sound, the you've got mail sound. And I remember getting in trouble with my parents for that, because I was hacking around changing things. Like, picture it, my dad sat down to check his email and it made these weird sounds. That voice is etched in my memory from that, from getting in trouble for one of my early hacking episodes, I guess you would say, as a kid.
So Yeah.
You know, that's actually an interesting thought. All of those different sensory inputs that were part of our formative years, if you grew up in the, you know, in the eighties and nineties with early computers: the sounds of early keyboards, the look of an eight bit animation on the screen, and, you know, smells and the way things feel, all that stuff. I don't know what the smells are. I guess maybe, like, the smell of the dust burning off the CPU or something when you power on a beast of a machine.
I really don't know. But in any case, something interesting to discuss perhaps in a later episode when we have more time. But for now, I would like to move on to upcoming events. Very important.
We have, several upcoming events in November and December that we wanna highlight. So starting off with November nineteen to twenty one, we have Microsoft Ignite in Chicago. Justin, I believe you are gonna be there.
Nope. Not gonna be able to make that one.
Unfortunately, it overlaps with AutoCon, you know, which you're gonna talk about here in a minute.
That's right. Well, speaking of AutoCon, November twenty to twenty two is AutoCon 2 in Denver. So, yes, Justin, I knew you were gonna be there along with me, but, I know you like to piggyback things and you're always on the road.
AutoCon 2 is the next iteration of the Network Automation Forum's conference, where I think they sold out in, like, two or three days. Maybe it's not two or three, but they sold out very quickly. So a very interesting event to keep an eye on for the future. We also have, on November twenty and twenty one, Tech Field Day twelve in Silicon Valley, but live streamed as well. So make sure you go to the Tech Field Day website for the live stream on those days.
December two through six, we have AWS re Invent in Las Vegas. Leon, I believe you're gonna be there for that one.
I'm gonna be That's cool.
I'll be there for that one as well.
And we're gonna have lots of goodies too.
And that's why I said that about the other thing, Justin, because you seem to be at all the events. Yeah.
Yeah.
But make sure to go visit the booth and say hi to Leon and Justin if you're there. On December fifth is the Nebraska Networking User Group, NE NUG, in Omaha.
For more information about other events like that one and others in your area, go to the USNUA website and take a look there. On December twelfth, I will be in Chicago for the Tech Talk Summit to talk about AI, along with our partners Myriad360, which should be fun. I love talking about AI these days and dispelling some myths. So for now, thanks so much for listening. Those are the headlines.