Mobile Monitoring Solutions


AWS Announces Fine-Grained Visual Embedding, a New Feature for Amazon QuickSight

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

Recently, AWS announced Fine-Grained Visual Embedding, a new feature for its cloud-scale business intelligence (BI) service Amazon QuickSight that allows customers to embed individual visualizations from Amazon QuickSight dashboards in high-traffic webpages and applications.

With Amazon QuickSight, customers can seamlessly integrate interactive dashboards, natural language querying (NLQ), or the entire BI-authoring experience into their applications, making data-informed decisions easier for their end users. The new Fine-Grained Visual Embedding feature allows them to embed any visual from a dashboard into their applications.


If visuals are updated or the source data changes, embedded visuals are updated automatically, and they scale automatically as website traffic grows. Data access is secured by row-level security, which ensures users can only access their own data.

There are two ways to leverage Fine-Grained Visual Embedding: the so-called 1-Click Embedding, or the QuickSight APIs to generate an embed URL. 1-Click Embedding is intended for non-technical users to generate embed code that can be inserted directly into internal portals or public sites. In contrast, ISVs and developers can embed rich visuals in their applications using the APIs.

1-Click Embedding comes in two types: 1-Click Enterprise Embedding, which restricts access to the dashboard to registered users in the account, and public embedding, which allows anyone to access the dashboard.

In addition to 1-Click Embedding, users can perform visual embedding through the API, using the AWS CLI or an SDK to call GenerateEmbedUrlForAnonymousUser (embedding visuals in an application for users who are not provisioned in Amazon QuickSight) or GenerateEmbedUrlForRegisteredUser (embedding visuals for users who are provisioned in Amazon QuickSight). When calling the API, users must pass the Dashboard, Sheet, and Visual IDs, which are available in the menu for the selected visual.
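
For instance, the parameters for a registered-user embed request scoped to a single visual could be assembled as follows. This is a sketch using the AWS SDK for JavaScript; all IDs and the user ARN are placeholders, and the exact parameter shape should be verified against the QuickSight API reference.

```javascript
// Placeholder identifiers -- copy the real values from the menu of the
// selected visual in QuickSight (Dashboard, Sheet, and Visual IDs).
const accountId = '123456789012';

const params = {
  AwsAccountId: accountId,
  UserArn: `arn:aws:quicksight:us-east-1:${accountId}:user/default/embed-user`,
  SessionLifetimeInMinutes: 60,
  ExperienceConfiguration: {
    // Scopes the embed URL to a single visual instead of a whole dashboard.
    DashboardVisual: {
      InitialDashboardVisualId: {
        DashboardId: 'dashboard-id',
        SheetId: 'sheet-id',
        VisualId: 'visual-id',
      },
    },
  },
};

// With the AWS SDK for JavaScript v3 (not executed here):
// const { QuickSightClient, GenerateEmbedUrlForRegisteredUserCommand } =
//   require('@aws-sdk/client-quicksight');
// const client = new QuickSightClient({ region: 'us-east-1' });
// const { EmbedUrl } = await client.send(
//   new GenerateEmbedUrlForRegisteredUserCommand(params));
```

The returned EmbedUrl can then be placed in an iframe on the host page; the anonymous-user variant takes a similar configuration but identifies visitors by namespace and session tags instead of a user ARN.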


Tracy Daugherty, general manager of Amazon QuickSight at AWS, told InfoQ:

Customers love that Amazon QuickSight makes it easy to perform and share advanced analytics without data science or infrastructure management expertise. With Fine-Grained Visual Embedding Powered by Amazon QuickSight, we are now making it even easier for customers to deliver powerful insights to users where they need them the most—from high-traffic webpages and applications to internal portals—with the simple click of a button.

Currently, the Fine-Grained Visual Embedding feature is available in Amazon QuickSight Enterprise Edition in all supported regions. Pricing details for Amazon QuickSight are available on the pricing page.

About the Author

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.


MongoDB, Inc. (NASDAQ:MDB) Shares Sold by Profund Advisors LLC – ETF Daily News

MMS Founder

Posted on mongodb google news. Visit mongodb google news

Profund Advisors LLC cut its position in shares of MongoDB, Inc. (NASDAQ:MDB) by 24.3% in the first quarter, according to its most recent disclosure with the SEC. The fund owned 490 shares of the company’s stock after selling 157 shares during the quarter. Profund Advisors LLC’s holdings in MongoDB were worth $217,000 as of its most recent SEC filing.

Several other hedge funds and institutional investors have also recently bought and sold shares of the company. Confluence Wealth Services Inc. acquired a new position in shares of MongoDB in the fourth quarter worth $25,000. Bank of New Hampshire purchased a new stake in shares of MongoDB in the first quarter valued at $25,000. Covestor Ltd purchased a new stake in shares of MongoDB in the fourth quarter valued at $43,000. Cullen Frost Bankers Inc. purchased a new stake in shares of MongoDB in the first quarter valued at $44,000. Finally, Montag A & Associates Inc. grew its position in shares of MongoDB by 200.0% in the fourth quarter. Montag A & Associates Inc. now owns 108 shares of the company’s stock valued at $57,000 after purchasing an additional 72 shares during the period. 89.85% of the stock is owned by institutional investors and hedge funds.

MongoDB Price Performance

Shares of NASDAQ MDB opened at $330.72 on Wednesday. MongoDB, Inc. has a 1-year low of $213.39 and a 1-year high of $590.00. The business has a 50-day moving average of $317.18 and a 200-day moving average of $331.20. The company has a debt-to-equity ratio of 1.69, a current ratio of 4.16 and a quick ratio of 4.16. The company has a market capitalization of $22.53 billion, a price-to-earnings ratio of -68.33 and a beta of 0.96.

MongoDB (NASDAQ:MDB) last released its quarterly earnings results on Wednesday, June 1st. The company reported ($1.15) EPS for the quarter, topping the consensus estimate of ($1.34) by $0.19. MongoDB had a negative return on equity of 45.56% and a negative net margin of 32.75%. The business had revenue of $285.45 million for the quarter, compared to analysts’ expectations of $267.10 million. During the same quarter in the previous year, the firm posted ($0.98) earnings per share. The company’s revenue was up 57.1% compared to the same quarter last year. On average, equities analysts anticipate that MongoDB, Inc. will post ($5.08) EPS for the current fiscal year.

Wall Street Analysts Weigh In

A number of brokerages have recently commented on MDB. UBS Group lifted their price objective on MongoDB from $315.00 to $345.00 and gave the company a “buy” rating in a research report on Wednesday, June 8th. Piper Sandler lowered their price objective on MongoDB from $430.00 to $375.00 and set an “overweight” rating for the company in a research report on Monday, July 18th. Robert W. Baird began coverage on MongoDB in a research report on Tuesday, July 12th. They issued an “outperform” rating and a $360.00 price objective for the company. Canaccord Genuity Group lowered their price objective on MongoDB from $400.00 to $300.00 in a research report on Thursday, June 2nd. Finally, Royal Bank of Canada boosted their price target on MongoDB from $325.00 to $375.00 in a research report on Thursday, June 9th. One research analyst has rated the stock with a sell rating and seventeen have given a buy rating to the company. According to MarketBeat, MongoDB currently has a consensus rating of “Moderate Buy” and an average price target of $414.78.

Insider Buying and Selling

In related news, CTO Mark Porter sold 1,520 shares of the business’s stock in a transaction dated Tuesday, August 2nd. The stock was sold at an average price of $325.00, for a total value of $494,000.00. Following the transaction, the chief technology officer now owns 29,121 shares in the company, valued at $9,464,325. The transaction was disclosed in a legal filing with the SEC. Also, Director Dwight A. Merriman sold 629 shares of the company’s stock in a transaction that occurred on Tuesday, June 28th. The shares were sold at an average price of $292.64, for a total value of $184,070.56. Following the sale, the director now owns 1,322,755 shares in the company, valued at approximately $387,091,023.20. Over the last 90 days, insiders have sold 40,795 shares of company stock worth $11,602,761. 5.70% of the stock is currently owned by corporate insiders.

MongoDB Profile


MongoDB, Inc. provides a general-purpose database platform worldwide. The company offers MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; and Community Server, a free-to-download version of its database that includes the functionality developers need to get started with MongoDB.

Featured Stories

Want to see what other hedge funds are holding MDB? View the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB).

Institutional Ownership by Quarter for MongoDB (NASDAQ:MDB)

Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with a FREE daily email newsletter.

Article originally posted on mongodb google news. Visit mongodb google news


MongoDB Stock: More Metrics Needed for Investors – Nanalyze

MMS Founder

Posted on nosqlgooglealerts. Visit nosqlgooglealerts


(MDB) – Earnings Outlook For MongoDB – Benzinga

MMS Founder

Posted on mongodb google news. Visit mongodb google news

MongoDB (NASDAQ:MDB) is set to release its latest quarterly earnings report on Wednesday, August 31st, 2022. Here’s what investors need to know before the announcement.

Analysts estimate that MongoDB will report earnings per share (EPS) of ($0.28).

MongoDB bulls will hope to hear the company announce that it has not only beaten that estimate but is also providing positive guidance, or forecasted growth, for the next quarter.

New investors should note that it is sometimes not an earnings beat or miss that most affects the price of a stock, but the guidance (or forecast).

Historical Earnings Performance

Last quarter the company beat the EPS estimate by $0.29, which was followed by an 18.56% increase in the share price the next day.

Here’s a look at MongoDB’s past performance and the resulting price change:

Quarter           Q1 2023    Q4 2022    Q3 2022    Q2 2022
EPS Estimate      -0.09      -0.22      -0.38      -0.39
EPS Actual         0.20      -0.09      -0.11      -0.24
Price Change %    18.56%     18.58%     16.42%     26.33%

Stock Performance

Shares of MongoDB were trading at $332.07 as of August 29. Over the last 52-week period, shares are down 15.95%. Given these negative returns, long-term shareholders are likely bearish going into this earnings release.

To track all earnings releases for MongoDB, visit the earnings calendar on our site.

This article was generated by Benzinga’s automated content engine and reviewed by an editor.

Article originally posted on mongodb google news. Visit mongodb google news


Research Assistant (Engineering, ALSET) job with NATIONAL UNIVERSITY OF SINGAPORE

MMS Founder

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Job Description

The Institute for Applied Learning Sciences and Educational Technology (ALSET), in conjunction with Nanyang Technological University and Singapore Polytechnic, will be building an AI-supported activity orchestration system designed to enhance learning during collaborative activities. The solution uses an existing codebase for the orchestration system built with the React.js framework and leverages Google’s Firestore/Firebase NoSQL database. The research engineer will be instrumental in developing new system features, integrating the system with an AI engine, and evaluating the efficacy of the solution with polytechnic and university students.

The job scope of the candidate includes:

  • Developing new features for the orchestration system with Node.js and React.js
  • Integrating AI modules into collaborative learning activities
  • Processing and selectively sharing event data with collaborators
  • Automating data analyses for stakeholder reports
  • Collaborating with NLP experts to improve activity designs
  • Conducting user experiments with students remotely and in person

This description is only a summary and describes the general level of work to be performed; it is not intended to be all-inclusive.

Job Requirements


  • A bachelor’s degree in Computer Science or a related field
  • Proficiency with AI techniques and user experience research methods
  • Deep understanding of Node.js and React.js
  • Experience integrating React.js with NoSQL databases
  • Understanding of dashboard design principles
  • Superior organization skills and dedication to completing projects in a timely manner

Job Competencies

  • Excellent proficiency with JavaScript and NoSQL
  • Excellent interpersonal skills
  • Rapid prototyping and iterative design principles
  • Previous experience with building production-ready systems

Covid-19 Message

At NUS, the health and safety of our staff and students are among our utmost priorities, and COVID-19 vaccination supports our commitment to ensure the safety of our community and to make NUS as safe and welcoming as possible. Many of our roles require a significant amount of physical interaction with students, staff, and members of the public. Even for job roles that may be performed remotely, there will be instances where on-campus presence is required.

Taking into consideration the health and well-being of our staff and students, and to better protect everyone on campus, applicants are strongly encouraged to be fully vaccinated against COVID-19 to secure successful employment with NUS.

More Information

Location: NUS, Kent Ridge Campus
Department: ALSET
Job requisition ID: 15161


Why Cloud Databases Need to Be In Your Tech Stack

MMS Founder

Posted on nosqlgooglealerts. Visit nosqlgooglealerts

Ravi Mayuram
Ravi is CTO of Couchbase. He previously held engineering roles at Oracle and Siebel Systems.

Enterprises are seeking ways to modernize their operations, and cloud databases are emerging as a critical element in the digital transformation tech stack.

Having the right cloud database can help address the range of applications companies depend on and also those that they build, from the cloud to mobile and edge. For companies aspiring to provide better and more personalized customer experiences, implementing a DBaaS (database as a service) should be a key consideration.

Databases Are Fundamental to Enterprise Business 

Cloud databases enable organizations across every sector — including retail, transportation, gaming, healthcare and banking — to perform the functions that traditional databases do, but at an accelerated pace and with more flexibility and scalability. Databases give companies free rein to manage configurations, analyze data and update processes according to their unique business models.

Enterprises depend on databases to keep pace with an increasingly digital world. A retail vendor, for example, is now able to offer more and faster transactions online, creating experiences designed to meet customer needs.

A global apparel and retail company that delivers premium styling with over 1,800 retail stores worldwide was able to launch its first digital showroom with the power of a cloud database. The scalable database technology ensures high performance, offline availability and real-time data synchronization for a showroom experience that runs smoothly even if the internet goes down.

In another real-world example, an international airline with a crew of over 41,000 pilots, flight attendants and flight schedulers operates 1.5 million flights a year. Using the edge and cloud capabilities in its cloud database, the airline has digitized its pre-flight check process by syncing data across crew tablets in real-time — even when devices are disconnected or when an internet connection is nonexistent. Another leading travel industry company is providing faster booking times and requiring less data to be transferred for its more than 3 million mobile app users.

Enterprises Require Databases That Scale Affordably

Companies need to operate at a constantly increasing scale — more data, more speed, more customer touchpoints. IDC estimates that there will be 41.6 billion connected IoT devices, or “things,” generating 79.4 zettabytes (ZB) of data in 2025. The only way to keep up with this moving train is to have a cloud database that can handle huge amounts of data and can do so with extreme agility and low latency.

There are two types of scaling: horizontal (adding more nodes to a system) and vertical (adding more resources to a single node). Relational databases of old are not elastic; they cannot scale based on the volume and velocity of data access. They are built more like airplanes: if you want to add 20 more seats to your flight, you have to get a new plane that is built with 20 more seats. In other words, you can’t extend the plane to accommodate 20 more passengers. This is vertical scaling.

On the other hand, cloud databases are more like trains. If you want to add 20 more seats to your popular train route, all you have to do is add another coach. You can further customize the train by adding more dining cars, cargo cars or passenger cars based on the route and payload specific to that route.

This fundamental difference is what makes cloud databases an important addition to your stack for modernizing your applications. A distributed NoSQL database that can scale to your needs will meet performance demands at an affordable cost while using fewer servers, making it possible to support large numbers of concurrent users.

Give Developers Database Technology They’ll Love

Developers are building applications that will push their companies forward as digital enterprises, and a cloud database makes the process easier and more efficient. A NoSQL cloud database can power web, mobile and IoT applications while enabling developers to use languages and processes they already know and love. Why is a distributed NoSQL database the top choice for developers? They appreciate the speed, accuracy and low latency it offers.

A NoSQL database supports agile development to keep pace with evolving application requirements. The entire NoSQL movement came about to solve the schema rigidity of relational systems and to solve the object-relational impedance mismatch.

With NoSQL databases, the friction between frontend app developers and backend database-facing developers is over. There are no separate data-model and object-model changes: a change to the application’s object model is automatically reflected in the database.

This eliminates the need for committees and approvals to get simple data model changes into production. For the first time, databases are now within the CI/CD pipeline. This greatly improves the agility of development teams, and is just as important as the ability to operate a fully geo-distributed database with continuous availability.
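
The schema flexibility described here can be sketched with plain JSON documents (a hypothetical example, not tied to any specific NoSQL product): adding a field to the application’s object model requires no database migration.

```javascript
// Hypothetical customer documents in a NoSQL collection.
// Version 1 of the application stores only name and email.
const customerV1 = { id: 'c-1001', name: 'Ada', email: 'ada@example.com' };

// Version 2 adds a loyaltyTier field. No ALTER TABLE, no migration:
// new documents simply carry the extra field, and old ones stay valid.
const customerV2 = {
  id: 'c-1002',
  name: 'Grace',
  email: 'grace@example.com',
  loyaltyTier: 'gold',
};

// Application code reads both shapes, defaulting where a field is absent.
function loyaltyTier(customer) {
  return customer.loyaltyTier ?? 'standard';
}

console.log(loyaltyTier(customerV1)); // "standard"
console.log(loyaltyTier(customerV2)); // "gold"
```

Because both document versions live side by side, the object-model change ships with the application code itself, which is what allows the database to sit inside the CI/CD pipeline.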

Cloud Databases Provide a Competitive Edge and Support Digital Transformation Initiatives

Gartner predicts that more than 85% of organizations will embrace a cloud-first principle by 2025 and will not be able to fully execute on their digital strategies without the use of cloud native architectures and technologies. Cloud databases can be the catalyst for enterprises to transform digitally in an era where scalability is a necessity, even during economic uncertainty. Throughout the ups and downs in the economy, business resiliency is of utmost importance — and the right database can support this.

As digital transformation picks up steam, more and more applications are operating both at planet scale and at the edge. Distributed cloud databases that can meet the needs at both ends of this spectrum are the ones businesses are increasingly looking to in order to solve their next-generation problems. We are excited to be part of this journey with our customers, and we look forward to even more foundational innovations to meet their needs of tomorrow.

Click here to learn more about how Couchbase is enabling customer success with our NoSQL cloud database.

Feature image via Pixabay.


The Road to Removing jQuery from

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

When the team that maintains the website faced the issue of updating their old and outdated jQuery dependency, they decided instead to get rid of it altogether. Among other benefits, they achieved a non-negligible performance improvement and, in the process, created a migration guide for other developers to tap into.

As senior developer Andy Sellick explained, the project was born as a side-project to remove tech debt and legacy code, but it eventually brought benefits in both performance and security, along with reduced maintenance complexity.

From a developer perspective removing jQuery has been a long but worthwhile process. Rewriting our code has taught us a lot about it and we’ve expanded and improved our tests. In many places we’ve been able to restructure and improve our code, and in some cases remove scripts that were no longer needed.

The task was a large one, with over 200 scripts spread across multiple applications and pages, plus their corresponding tests. On the positive side, each script could be rewritten as an isolated piece of work, which made contributions much easier across the whole organization. This made it possible, for example, for backend developers to get involved in the process and speed it up.

The team also realized the task did not have to be an “all or nothing” endeavor to turn out successful. Indeed, where specific applications used jQuery in more complex and larger scripts, the team decided to give them their own jQuery dependency to avoid slowing down progress while still ensuring most of the website would be migrated.

Speaking of the benefits the migration brought, security is a clear winner, since the site was stuck on a seriously outdated jQuery 1.12.4. Furthermore, the migration got rid of a major maintenance problem in itself.

The team summarized several performance improvements in a second article, thus addressing some of the initial criticism, which focused mostly on the argument that jQuery has a negligible size in the context of modern network speeds and caching. On the contrary, while jQuery is indeed only 32 KB in size, head of frontend development Matt Hobbs stressed that JavaScript libraries are render-blocking resources. This means they have a significant impact on the time it takes for a page to fully render (improved by 17%) in addition to sheer page load times (improved by 8%). Additionally, the time required for the web pages to become interactive also improved by 17%. The benefits are even greater for visitors using slower devices or experiencing adverse network conditions, as Hobbs detailed in a Twitter thread that is a great example of how to measure web performance.

As mentioned, the team also created a specific guide to remove jQuery from an existing JavaScript codebase, including a list of common issues and hints at how to deal with syntactic and standard library differences.
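
As a rough illustration of the kind of conversion such a guide covers (the selector and class names below are hypothetical, not taken from the team’s actual codebase), a common jQuery idiom maps to standard DOM APIs like this:

```javascript
// jQuery version being replaced (for reference):
//   $('.js-accordion').addClass('open').on('click', toggle);
// Vanilla DOM equivalent, written as a small init function:
function initAccordions(root, toggle) {
  // querySelectorAll replaces jQuery's $() selector engine
  root.querySelectorAll('.js-accordion').forEach(function (el) {
    el.classList.add('open');             // replaces .addClass('open')
    el.addEventListener('click', toggle); // replaces .on('click', toggle)
  });
}

// Usage in a browser: initAccordions(document, function () { /* ... */ });
```

Because each such script is an isolated conversion of this kind, the work could be split across many contributors, as the article notes.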

About the Author


Presentation: Building Applications from Edge to Cloud

MMS Founder
MMS Renato Losio Kristi Perreault Luca Bianchi Flo Pachinger Tim Suchanek

Article originally posted on InfoQ. Visit InfoQ


Renato Losio: In this session, we are going to be chatting about building applications from edge to cloud, and also understanding what we mean by edge and what we mean by from edge to cloud.

Before introducing today’s panelists, just a couple of minutes to clarify what we mean by building applications from edge to cloud. Edge computing has, in the last few years, added new options, delivering data processing, analysis, and storage close to end users. We’ll let our experts define what we mean by close to the end user and what the advantages are, and see as well what we mean by from edge to cloud. Data is no longer confined to a data center or to a bucket in a specific region; it is generated in ever-growing quantities at the edge and processed and stored in the cloud. Cloud providers, and not only the big ones, have extended their infrastructure as a service to more locations and more entry points and integrated different options at the edge, which has also increased the complexity and the choices for developers and for projects. We’re going to discuss the benefits and limitations of edge technologies, and look at how we can adopt them in real-life projects.

Background, and Journey through the Edge

My name is Renato Losio. I’m a Principal Cloud Architect at Funambol. I’m an InfoQ editor. I’m joined by four industry experts who have experience on the edge side. That is definitely not my field. I’d like to give each one of you the opportunity to introduce yourself and share your own journey or expertise on the edge, and why you feel it is important. I will start by asking each of you for a couple of sentences on who you are and where you’re coming from in this journey to the edge.

Luca Bianchi: I am Luca Bianchi. I’m CTO of Neosperience, an Italian-based company focused on product development and helping companies build great products. During my career, I had to face a number of challenges as companies needed to push data back from the cloud to the edge. We started in the marketing domain, where data needed to reach the edge. Then in recent years, I started focusing on healthcare and security for machine learning. I am quite in the middle of the journey to the edge. I don’t know which part is behind and which part is in front of me, but I have enough scars about that.

Tim Suchanek: It’s a dear topic to me, as I’m spending my full time working on this topic. I’m Tim, CTO of Stellate. We founded the company last year to make it easier to cache GraphQL at the edge. While it’s great to bring the whole application to the edge, there’s a lot happening in order to make it possible for maybe older applications but also new applications to benefit from the edge we’re building at Stellate, so that you can cache at the edge. That’s what we’re doing.

Flo Pachinger: I’m Flo. I’m a Developer Advocate at Cisco. I’m pretty sure this is my topic, edge compute, because I’ve already done some IoT projects specifically for edge compute, a lot in the manufacturing space in Germany. We had a lot of challenges there: how to connect and then get the data, in an aggregated or better form, to the cloud. I’ve also worked on some computer vision stuff.

Kristi Perreault: My name is Kristi Perreault. I am a Principal Software Engineer at Liberty Mutual Insurance and an AWS serverless hero. I work in the serverless DevOps space at Liberty Mutual. We’re pretty heavy on the cloud computing side of this journey. I definitely know that there are some folks working on the edge still. I’ve also had some personal experience in doing so in my own projects with some IoT, some robotics, some cloud computing.

The Main Benefits and Use Cases of the Edge

Losio: I will say that from your introductions, you come from almost completely different industries and experiences. It’s quite obvious that edge technology is not just one sector. It’s not just machine learning. It’s not just IoT. It’s not just finance or whatever. It is helping in very different areas. More data is now collected and processed at the edge and used in very different technologies and locations. What do you see as the main benefits and use cases? What are the use cases we maybe don’t talk about?

Pachinger: From my experience, it comes down to latency, and to having compute directly at the edge. You have critical compute tasks that need to be computed at the edge. A classic example: I had a robot arm demo and challenge at our last event, and it required real-time communication. It was a C++ interface, and you need to move the robot arm around. If the latency were really high, say 500 milliseconds, you couldn’t do this; we’re talking about really low milliseconds, not 2 seconds. This application can’t be in the cloud, so it really needs to be on-prem. Especially in the manufacturing space, it’s about connecting the machines, collecting telemetry data, and safety requirements, of course, such as logic for shutting a machine off. You have to have something at the edge for that. What I see especially is the manufacturing space, utilities, oil and gas, and these kinds of industries; these are remote places. You need edge compute there, because the latency, or the distance to travel to another compute node, is just too great.

Suchanek: Maybe before I dive into that, I want to quickly talk about the term edge, because it’s worth quickly defining. In the edge industry, for example among CDN providers, which have hundreds of locations around the planet, the edge meant the edge of the network, meaning where they have a presence. The edge can also go a step further, as Flo already alluded to with ML applications. In a sense, if you wanted to, you could define what Tesla does with GPUs in its cars as edge as well. If you go that far, you could also say that every browser is an edge. I’m not sure if we want to go that far. Coming to the applications that we see: when you’re in the API space, it’s quite useful to do certain things at the edge that you don’t do at your central server. Just because it’s separate infrastructure, that’s already a big advantage.

For example, we are building a rate-limiting product right now. The fact that a request doesn’t even have to hit my server, so I can protect the origin server, is a big advantage. We can dive into architecture more later, but I definitely see more of a hybrid approach. I have not yet seen really large-scale applications that are written from scratch at the edge. They’re slowly coming. You basically need all the capabilities that you have in the central cloud at the edge, and they’re slowly coming. From our side, what we do is make it easy to cache GraphQL responses at the edge. GraphQL is an API protocol that Facebook came up with a few years ago; in a sense, it’s an alternative to REST. The advantage of GraphQL is that it has a very clear set of boundaries and a specification for what every request and response can look like, so you can build more general tooling. That’s why we built the caching. With GraphQL, when you know the schema of a GraphQL server, you can do smart things and invalidate, because reads and writes are separated in GraphQL. That’s how we utilize it. Without GraphQL, edge caching for APIs is more cumbersome; now this is a new opportunity that we are utilizing.
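
The read/write separation mentioned here is visible in the operation type of every GraphQL document: queries are reads, which an edge cache can serve, while mutations are writes, which must pass through and trigger invalidation. A deliberately naive sketch of that routing decision (a real edge cache would use a proper GraphQL parser, not a regex):

```javascript
// Naive classifier: decide whether a GraphQL operation is a cacheable read.
// Real edge caches parse the document properly; this only illustrates the idea.
function isCacheableOperation(graphqlDocument) {
  const trimmed = graphqlDocument.trim();
  // A document starting with "mutation" or "subscription" is a write/stream;
  // anything else ("query ..." or the shorthand "{ ... }") is a read.
  return !/^(mutation|subscription)\b/.test(trimmed);
}

console.log(isCacheableOperation('query GetUser { user(id: 1) { name } }')); // true
console.log(isCacheableOperation('{ user(id: 1) { name } }'));               // true
console.log(isCacheableOperation('mutation UpdateUser { updateUser { id } }')); // false
```

An edge node applying this check can serve reads from its cache and forward writes to the origin, using the server’s schema to decide which cached reads a given write invalidates.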

Losio: Actually, I took it for granted what edge is as well, but different people might consider edge to be very different scenarios. I was even thinking, maybe because I’m really cloud oriented, for me edge was really a point of presence of the cloud provider, but actually it’s not necessarily even that. And even those keep growing.

Limits of Cloud Providers Adding Zones and Functionality Closer to End Users

What is, for example, your point of view on where we are going with the cloud? It’s obvious that all the providers are adding zones and adding new functionality closer to the end user. Where do we see the limit? Is it just a latency point, or are there other advantages?

Bianchi: I think that there are a number of advantages, not only based on latency, which is a big point. We had a project in which our customer needed to manage vehicles on the highways and provide feedback on the vehicles while people were driving. They had very tight latency constraints. I also see that when you enter the machine learning domain, you basically have constraints related to bandwidth. I had the opportunity to work on video processing applications, in which you process the video stream using a machine learning model at the edge. Here we need to explain what we mean by at the edge, because you can be directly on the camera, or maybe you can be in a data center close to the camera. There are a number of different options, but you are definitely at the edge, and then you’re bringing out from the stream only the insights. You’re already reducing the bandwidth needs. This is one thing.

The other thing, especially important for the healthcare domain, but not only for the healthcare domain, is data locality. Sometimes you cannot afford to ship people’s data around for regulatory or privacy reasons; you cannot move data outside the legal boundaries of the owner. With machine learning at the edge, you can deploy the model directly from the cloud. You can train the model and manage everything in the cloud, then deploy the model at the edge, process the data there, and extract only the anonymized insights at the edge.

Developer Perspective on Cloud Latency

Losio: You actually already mentioned a good point that is just not simply a problem of latency. I don’t know if you have any specific example as well. In your case, you mentioned that you work a lot on the cloud side as well as a team with some on the edge side. From a developer perspective, does anything change? I’m always this cool person that I’m playing with the cloud. If I’m coming from a developer side, probably the only thing I think are, ok, edge is maybe something new but until now, I always had to think about the latency of my request or the response time or whatever of my API that I was developing. Do I need, as a developer in the cloud, to think about that, or is that something for someone else?

Perreault: Yes. I like that Tim took a step back here and actually thought about edge as different definitions too, because I definitely have a different viewpoint than the folks here. We’re all insurance, we’re financial data. We’re not really using things that are hardware or machine based. When I think of edge, I just think of it in the traditional definition of having your compute and your processing really close to your data and your data storage. I think that we do that in a hybrid mix with the on-prem that we still have, as well as our cloud computing and things. In terms of a developer, I always think it’s just important that you should understand every step of that process, even just at a very basic level, in terms of things like vocabulary is changing all the time, things become buzzwords and overused, and those kinds of things. I think it’s important to know what edge is. Maybe not even just the traditional definition or anything, but how that helps you and your company.

We’ve hit on latency a lot, but one thing that our company is concerned with is cost. That’s something that we’re going to be thinking of in terms of edge computing and your cloud solutions as well. That’s something that you should be thinking of as a developer, too, because you want to make sure that you’re building applications that are really well architected right now, is a big focus for us. That involves all of those pillars of security, of cost, and of performance and reliability.

The Cost of Edge Compute

Losio: That’s an interesting point of view. I never thought about the cost part. Are you basically suggesting that doing it at the edge is cheaper because the data doesn’t have to go around the world? Or is it actually more expensive, because I’m paying for data centers where the cloud provider is not operating at the same scale; maybe the local zone is a bit more expensive than a data center far away?

Perreault: I think it depends on what you’re doing and what you’re working on. I think that that’s what I mean when I say like, you just got to be educated on all the different options, because in some cases, it might be cheaper for you to go to the edge than it is to keep all of that up in the cloud, and do all of the processing, all of the data, and you’re worried about performance and things. It might be cheaper in that aspect to do it for edge. It really depends on your use case, how much data you’re processing, what you’re working on, and how many other services or things you’re interacting with. Then I think of, in the cloud, there’s almost edge computing in a way. If you’re going multi-region, or cross account, or doing different availability zones, that might not be optimized. It might help from a security perspective, and it might help from a data backup perspective, but it’s not going to help from a cost perspective. Just different things to think about, I think, in terms of insurance and financial lens.

Pachinger: We did a calculation there. It’s completely true, it really depends on the use case. One example of what we did, especially at remote places, was that we placed an industrial router there and actually did edge compute on this industrial router, whose connectivity was 3G and 4G. You can definitely see that if you don’t compute this at the edge, then you have costs at the cellular level, with the service provider there, plus you also have costs at AWS, or at the public cloud provider. You have double the cost there. We did the calculation, and yes, without any surprise, it was: go for edge compute. That’s a no-brainer. It really depends on accessibility. Is it remote? How much can we actually leverage classic internet or cable and extend what we have there? For this use case, it was clearly go for edge compute.

Losio: That’s actually a good point, because I remember not long ago, when I saw for the very first time in my life one of those Snowcone devices that you borrow from the cloud provider with a lot of storage. There was some computing on the machine as well, and I was like, are we really going to do any computing on this machine? Then I realized that there actually are use cases. Maybe they’re not my use cases, but there’s real benefit, because maybe that box is going to travel for two weeks before going back, so you can take advantage of that, and also because, depending on the connection you have, it might not be that cheap to do it any other way.

Regulatory Compliance and Edge Compute

In that sense, do we see compliance rules as well? I was thinking about the finance world, but not just that one; Luca mentioned that data has to stay in a specific country. Do we see that as an edge problem as well, or is it an entirely different, more legal problem? I don’t know if you have a shared view on that one. As you said, the edge can be many things. I remember, actually, you really considered that as part of the edge as well.

Suchanek: I think when it comes to data compliance, I just have to quote our lawyers when we talked about GDPR, because a bunch of customers are asking about it. The lawyer said, only God knows if it’s GDPR compliant. The thing is that it’s really tough with GDPR compliance. It’s not like with other compliances like SOC 2 where you can just actually do an audit and you’re done. It’s a bit more complicated than that. The irony is that, for us, at least in caching, there were customers asking for caching something only in a specific region. It turns out that’s not needed. What I mean by that is, if I do a request from Europe, then usually we only cache in Europe. If I do the request from wherever, then we cache there apparently. According to our lawyers, that is enough for GDPR compliance. What some users were asking is to say, we never cache outside of Europe, even though I as a European citizen am traveling outside of the EU. That is my knowledge now. It’s a confusing field. All the bigger enterprise customers we have, their legal team then comes and talks to our legal team, and that needs to be figured out. I think there’s still a lot happening there to realize these things.

Perreault: Tim, if I can elaborate on that, too. Obviously, compliance is huge for us. We’re dealing with credit card numbers. We’re dealing with quotes and claims and insurance data and all that PII. It’s a huge concern of ours. You hit on a really good point where we’re also global, so we have contractors everywhere. Something that you have to think about is that when you’re caching this data, it’s different in the U.S. where I live versus our Ireland office, or our contractors out in India. There is a fear there. With on-prem, those boundaries are less blurry, but when you start thinking hybrid cloud, there’s a fear of going to the cloud: what does this look like, and what is security? How do we answer those questions? It’s not a one-size-fits-all solution; it depends on what you’re doing.

Edge Tech and Hybrid Cloud

Losio: You just mentioned hybrid cloud. I was wondering if there’s any specific challenge or specific difference when we think about edge technology in a hybrid scenario. If I understand correctly, there are two aspects. One, which you mentioned before, is that I might keep something local, not because I don’t want to go to the public cloud, but because I need the data next door, and next door the big cloud provider, whether it’s Microsoft, Google, Amazon, or whoever else, is not there. If I’m in Iceland, maybe I need to do it differently with a local provider. I was wondering, in terms of paradigm, whether there’s still a very big difference with hybrid, or whether everything becomes hybrid the moment you start to talk about edge?

Pachinger: From that perspective, it’s very interesting, because hybrid is, again, a matter of definition. What we see is that the majority of customers are hybrid cloud users; they’re leveraging several clouds. I think it’s not wrong to say that hybrid in a way also extends to the edge, to a specific data center or to another service provider. There again, coming back to Tim’s suggestion, we need to define where the edge is. Usually, I go by the Linux Foundation Edge organization. They released a really good white paper where everything is included. They say, this is a specific edge category for embedded hardware; this is more for the service provider; this is more for rack servers. It depends a bit on the hardware, of course, where the edge is defined. As you said, it definitely fits a hybrid cloud strategy. It all comes down to the hardware, and as for definitions, I usually tend to step back from how to define it and focus more on what it does. Then we can be clearer about things.

Suchanek: Maybe about what you said, Flo. I liked that point about where the boundary is. I think that, in a sense, you could argue it’s a blurry line. If you use AWS, I think they already support over 40 or 50 locations, if you count their Local Zones and Wavelength Zones. Then, ok, one location is not actually edge, so we take two, also not edge. When is it edge: 10, 20, 30, 40, 50? That’s the thing; you just have multiple locations. You for sure need to put something in front. They, for example, have the Global Accelerator product, or you can use Route 53 for IP based routing or something like that. After all, I think as soon as you have a certain number of locations, you can probably talk about edge. All those locations could basically run everything that a cloud can run. You can check that out for AWS specifically. I don’t have any affiliation with them, but we use it. You can see that certain locations support the whole offering, while a small edge location, as they call it, supports less of the offering. After all, you have the cloud in a region, and then you just have many of those, and then you can call it edge in a sense. It’s blurry, for me at least.

Bianchi: It’s blurry, but I think that there is quite a big difference with edge computing that is under the cloud provider’s domain. If AWS, or whoever, has full control over the edge location, it is simpler. If you need to deploy Lambda at the edge, just to give you an example, it’s much simpler. It’s not as straightforward as deploying a Lambda directly in a given region, but it’s simpler. If you need to deploy something on an edge device that is located within a company’s boundary, you have to face a number of complexities related to integration, network security, and a lot of other things that make it a lot harder.

The Future of Edge Technology

Losio: I was wondering, you also mentioned before the use cases for edge technology. I was wondering if there are new use cases. If I think about where we were 10 years ago, there was really no edge, or at least we were just talking about a bit of caching; thinking about Amazon, maybe there was a bit there, you could use CloudFront in front of a load balancer. That was probably already there at the beginning. Five years ago, some services at the edge started; now it’s a very big topic. I was wondering, in 5 or 10 years, where is this heading? Are there just new areas where we’re going to use edge technology, or will edge technology simply become the norm and almost transparent to the service?

Suchanek: I think that’s an interesting one, let’s look into the future. I think that more is moving to the edge. Let’s assume we want to move a whole application to the edge; what would need to happen on a technical level, without any centralized data storage anymore? Compute is the easy part, it’s not that challenging; compute at the edge we’ve already got. Data is the challenge. What options do we have there? Caching is one option: we still have maybe five or six main regions where we have our databases, and they are replicated or sharded. The other option is that at the edge we could be sharding by user, so only that user’s data is stored there. What is obviously not going to scale is taking terabytes of data to every edge location and replicating it 200 times. We would need to come up with better solutions for how to distribute the data. There are many approaches right now, like FaunaDB, and whatever else you’ve got.

I think the solutions are still quite early. There’s a saying that you shouldn’t trust any database that’s younger than five years. We’ll need to wait a bit for all the databases that just came out before we use them in production for edge related things. Cloudflare released a SQLite-based approach. Fly does something similar, where they stream SQLite changes. That still won’t give you strong consistency, only eventual consistency, meaning if I do a write and a read directly after, I might see old data. That’s a tradeoff.
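The read-after-write hazard of eventual consistency described here can be shown with a toy two-region model: writes land on a primary first, replication happens later, so a read from the replica right after a write returns stale data. The classes and method names are purely illustrative assumptions, not any real database's API.

```typescript
// Toy model of eventual consistency at the edge: a write lands on the
// primary region immediately, but the replica only catches up when
// replication runs, so a replica read right after a write can be stale.

class Region {
  data = new Map<string, string>();
}

class EventuallyConsistentStore {
  primary = new Region();
  replica = new Region();

  write(key: string, value: string): void {
    // Writes go to the primary only; replication is asynchronous.
    this.primary.data.set(key, value);
  }

  readFromReplica(key: string): string | undefined {
    // An edge location near the user would read from its local replica.
    return this.replica.data.get(key);
  }

  replicate(): void {
    // In reality this happens in the background after some delay.
    for (const [k, v] of this.primary.data) this.replica.data.set(k, v);
  }
}
```

Until `replicate()` runs, a user who writes and then immediately reads from the replica sees the old state: exactly the tradeoff mentioned above.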

Losio: It’ll be an interesting tradeoff, because I wonder which [inaudible 00:30:54] applications will accept that as a tradeoff, knowing the importance of the milliseconds becoming small or smaller in that case.

Suchanek: I would be curious to hear Kristi’s view on that, because I think there are a bunch of applications that can accept that tradeoff of eventual consistency.

Losio: Probably finance is not one of the main areas.

Suchanek: Probably not.

Perreault: Again, it depends. Actually, the one thing that kept playing in my mind when you were talking about this was, it’s not new, but new to me, what we’re exploring with content delivery networks. We have a lot of frontend applications, so we have to serve up web pages, and that’s how we gather our data. That’s one thing that is top of mind when I think of edge locations and computing at the edge. I think that there’s going to be a lot of that in terms of networking. Yes, to your point, financial data, insurance data, credit cards, that stuff is going to be really hard to work around with the edge. Especially since that brings in the compliance issue a lot too. Every country, globally, has different laws and different policies; even by state here it’s different. I know our process for working with California is completely different from working with Ohio. It’s going to be interesting. I wish I had that crystal ball to see who comes up with that solution and what it looks like. I think we’re interested in that. Right now, we’re all bought in on cloud and we’re headed that way. Hopefully, we’ll see how multi-region, cross account, and all of that works out.

Pachinger: For me, what I see right now is maybe an early adopter phase, or a bit of a maturity level. A good application is computer vision, where we have GPUs at the edge, and I’m thinking of nano form factors, super small sizes. You can do a lot with cameras, with image detection. A classic example would be detecting something on vehicles. Then persons, of course. We have lots of use cases like, is the person actually wearing their face mask or not? Then again, with the GDPR, it’s like, are we actually violating the privacy of the user? It depends, of course, on where you can deploy the cameras. At the machine learning level, it’s inference at the edge. This is still a bit early, but I already see some applications there. I think they will definitely grow and increase in that area.

Use Cases for ML at the Edge

Losio: Do you see any other use case increasing for machine learning at the edge?

Bianchi: I think that in the next three to five years, a lot of machine learning workflows will be pushed to the edge due to the fact that a number of specialized devices are coming, which is a good thing. On the other hand, I don’t think that right now we have a sufficient level of abstraction from the device. When you are deploying machine learning on a device, you still need to understand what hardware is underneath: what kind of GPU you’re using, what instruction set is supported. We have great tools, but they are evolving quickly. I don’t think that we can abstract away from the hardware complexity yet. Hopefully, within the next five years we will be there, and we will be able to take a machine learning model, cross compile it for whatever given hardware, and then deploy it to that hardware. We are not there yet.

Suchanek: I think the ML use case that has been mentioned makes total sense. We will also still see more use cases coming, as we now have compute as a general primitive available with something like Cloudflare Workers. I think people are still looking for these use cases. I just want to mention one big one that I haven’t heard, which is A/B testing. There’s a whole industry around that. There’s, for example, LaunchDarkly, which is a startup basically built on that, where at the edge you make the decision where to send the actual request: request routing, generally. That’s the big one. Anything like, for example, the HTTP gateway or API gateway from AWS, all these things, I think, are perfect candidates to run at the edge, so you can decide where the actual request goes.
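The edge-side A/B decision mentioned here usually boils down to deterministic bucketing: hash a stable user identifier so the same user always lands in the same variant, then route the request accordingly. The hash function and variant names below are illustrative assumptions, not how any particular vendor implements it.

```typescript
// Sketch of edge-side A/B bucketing: hash the user id to a stable bucket
// so the same user always sees the same variant, without a round trip
// to a central server.

function hashString(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    // Simple 31-based rolling hash, truncated to an unsigned 32-bit value.
    h = (h * 31 + s.charCodeAt(i)) >>> 0;
  }
  return h;
}

function chooseVariant(userId: string, variants: string[]): string {
  // Deterministic: the same user id always maps to the same variant,
  // so the edge can decide routing with no shared state.
  return variants[hashString(userId) % variants.length];
}
```

Because the decision needs no central lookup, it fits naturally in front of the origin, which is why request routing is such a good edge candidate.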

The Open Standards and Risk of Vendor Lock-In at the Edge

Losio: You mentioned some services from Cloudflare and AWS as well. One topic that usually comes up when we think about edge technology in general [inaudible 00:36:00], all the exciting new services and products, at the edge or not at the edge, is: what are the standards, what are the open standards, and what is the risk of vendor lock-in if I think 5 or 10 years out?

Pachinger: It’s always a challenge, because we see this with the cloud providers as well: the more you develop your application towards one cloud, of course, the more they try to leverage that, versus saying, ok, let’s be cloud native, let’s go in this direction and make it easy across cloud providers. I think at the edge it will be similar. However, we are leveraging a lot of open source container technologies, containerization. I think it will stay at this level. Containerization is, I think, the way to go. Then we have K3s, for example, as a really cool solution there. Then a classic hypervisor, maybe an embedded hypervisor, but nothing super big. It depends again on what edge we’re talking about, but for the classic edge, the smart device edge, or the constrained device edge, this is where I see containerization. Also, with containerization, you can have it in this category, and then you can also have it in a data center, in a rack server, for example. I think this is very important: to have it open, to have no vendor lock-in. This is of course very important, and a lot of users like this. That would be from a software perspective.

Losio: Do you see that as an issue in your experience? If I understand well, you’re very much an AWS focused company.

Perreault: Yes. Actually, I did want to add on to what Flo said, because I do agree. I think that one of the things that I really like about working at Liberty Mutual is that we are very open to whatever tools you want to use, whatever providers, whatever way you want to go, we are very much AWS driven and focused. We do have Azure support. We do have Google Cloud Support. There are folks that go to those ways. We do have some folks in the machine learning space or processing large amounts of data and data analytics that might prefer Azure and some of the tools that they have over there, over AWS. We also use containerization. That’s a great kind of half step from on-prem to cloud. That’s one that folks reference. We also use Kubernetes, Docker, Fargate, it’s all across the board.

With a really large company, the idea of vendor lock-in is a little different, because once we’re bought into something we’re bought in, and then it’s a long process to move, and to take that stuff out. We have about 3000 developers, 5000 people working in tech, and we’re 110 years old, so there’s a lot of data. We’re always acquiring new companies and things too. We’ve inherited some of their vendor lock-in or some of their tooling to bring that in and modernize, or to even go in that direction, if that’s a better solution. The idea of vendor lock-in is interesting, because we have what feels like every vendor all the time, and it hasn’t been too much of an issue. It’s just a matter of we’re going to be bought in on AWS, and like most of our expertise is there. If you choose to do something else, you’re more than welcome to, but some of that learning curve might be a little harsher and steeper for you.

The Roles Arising from Edge Compute

Losio: I was actually wondering, thinking about our audience of developers and software engineers, how do you approach the cloud in the long term? From being maybe a cloud architect or cloud engineer to being an edge expert: do we see the need for edge experts? Is it going to be a specialty somehow? Are we going to have a new title, maybe it already exists, edge engineer, I have no idea? Or do you see it, from a software developer’s point of view, as almost transparent: you deal with your code, and someone else deals with the location of your data? Tim, from the graph point of view?

Suchanek: I think that we probably won’t need new job descriptions there. There are a few things unique to the edge, for example, interconnectivity, but usually, as Luca also mentioned earlier, the abstractions that you get these days are so good that I can “just write a function and upload it” and the provider usually takes care of it. In a rather advanced scenario, I might have to know about that, but then we’re in distributed systems anyway. I think, generally, with the JavaScript interfaces that are available these days, Cloudflare Workers and so on, and Deno also has an edge product, you don’t really need to be that specialized.

I also want to add one thing regarding vendor lock-in. There is an important initiative happening right now called WinterCG, where basically all the major edge compute providers came together and said, let’s agree on a minimum API that we want to support.
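The minimum API that WinterCG-style efforts converge on is essentially the web-platform set: fetch semantics with Request, Response, and URL. A handler written only against those globals is the portable shape; this is a minimal sketch (the exact export convention still varies by runtime, and the route here is made up):

```typescript
// A minimal handler written only against web-standard APIs (Request,
// Response, URL), the kind of common surface the WinterCG initiative
// aims to standardize across edge runtimes.

async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);
  if (url.pathname === "/hello") {
    return new Response("hello from the edge", { status: 200 });
  }
  return new Response("not found", { status: 404 });
}
```

Because the handler touches no provider-specific API, moving it between runtimes is mostly a matter of adapting the export or registration shape, which is exactly the lock-in reduction being described.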

Losio: I’ll ask Luca if he shares the same view, that we don’t need special developers.

Bianchi: I don’t agree with Tim, because we are seeing that in some domains, especially related to machine learning, people are naturally specializing in bringing models down to the edge. For instance, we are using AWS Panorama, and it requires a lot of effort. Some people on my team had to study and specialize in compiling the models directly for the hardware. Maybe in the future the abstraction will be more mature, so we’ll be able to have the same developer develop the same thing for the edge and for the cloud. Right now, I think that some effort is required.

Losio: If I simplify a bit what you said, it’s basically, if you develop a Lambda function, probably you don’t care where we deploy it. Maybe you can deploy it at the edge or you deploy it at a standard region or whatever. If you deploy a machine learning model, and it depends really a lot on the hardware, then if the hardware at the edge is different like Panorama or whatever, then you have a different challenge.

Bianchi: Yes. Exactly.

Pachinger: It depends also maybe on the industry. A classic example is IT/OT convergence, where OT, operational technology in a manufacturing context, would like to integrate more with cloud providers, with classic IT. From a layer 1 network perspective up to layer 7, the application perspective: collect data, leverage machine learning, and get the data to the cloud. This is exactly where specific knowledge is important. How do the machines operate? What are the network requirements? There, expertise is definitely needed. If it’s a matter of, I can just use a Lambda, I can just use some cloud native technologies, and I don’t care about the specifics at the edge, then, not everybody, but in a way, a classic standard cloud developer can definitely handle that.

See more presentations with transcripts

Subscribe for MMS Newsletter

By signing up, you will receive updates about our latest information.


Here’s How To Trade MongoDB Inc. (MDB) Aggressively Right Now – News Heater

MMS Founder

Posted on mongodb google news. Visit mongodb google news

MongoDB Inc. (NASDAQ:MDB) fell 6.08% from its latest closing price compared to the recent 1-year high of $590.00. The company’s stock price has gained 0.70% over the last five trading sessions. Barron’s reported on 06/24/22 that the “Next Big Issue for Software Stocks Will Be Sharp Cuts in Earnings Estimates.”

Is It Worth Investing in MongoDB Inc. (NASDAQ:MDB) Right Now?

Plus, the 36-month beta value for MDB is at 0.95.

MDB currently has a public float of 65.61M shares, and shorts hold a 5.40% ratio of that float. Today, the average trading volume of MDB was 1.51M shares.

MDB’s Market Performance

MDB stocks went up by 0.70% for the week, with a monthly jump of 6.27% and a quarterly performance of 32.80%, while its annual performance rate touched -16.06%. The volatility ratio for the week stands at 5.17% while the volatility levels for the past 30 days are set at 4.70% for MongoDB Inc. The simple moving average for the period of the last 20 days is -5.71% for MDB stocks with a simple moving average of -11.84% for the last 200 days.

Analysts’ Opinion of MDB

Many brokerage firms have already submitted their reports for MDB stock, with Robert W. Baird reiterating its rating for MDB by listing it as “Outperform.” The predicted price for MDB in the upcoming period, according to Robert W. Baird, is $360, based on the research report published on July 13th, 2022.

Redburn, on the other hand, stated in their research note that they expect to see MDB reach a price target of $190. The rating they have provided for MDB stocks is “Sell” according to the report published on June 29th, 2022.

Needham gave a rating of “Buy” to MDB, setting the target price at $350 in the report published on June 10th of the current year.

MDB Trading at 5.88% from the 50-Day Moving Average

After a stumble in the market that brought MDB to its lowest price for the period of the last 52 weeks, the company was unable to rebound, for now settling with a 43.72% loss for the given period.

Volatility was left at 4.70% over the last 30 days, while the weekly volatility rate increased to 5.17%, as shares surged +4.42% against the moving average over the last 20 days. Over the last 50 days, by contrast, the stock is trading +41.30% higher at present.

During the last 5 trading sessions, MDB rose by +0.70%, which changed the moving average for the 200-day period to -40.16% in comparison to the 20-day moving average, which settled at $352.88. In addition, MongoDB Inc. saw a -37.27% return over a single year, with a tendency to cut further losses.

Insider Trading

Reports indicate that there were several insider trading activities at MDB, starting with Porter Mark, who sold 1,520 shares at a price of $325.00 back on Aug 02, a sale valued at $494,000. After this action, Porter Mark now owns 29,121 shares of MongoDB Inc.

MERRIMAN DWIGHT A, a Director of MongoDB Inc., sold 1,000 shares at $310.68 in a trade that took place back on Aug 01, a sale valued at $310,679. MERRIMAN DWIGHT A now holds 540,896 shares.

Stock Fundamentals for MDB

Return on equity now stands at -47.90, with -13.50 for return on assets.

Article originally posted on mongodb google news. Visit mongodb google news

Podcast: Swyx on Remote Development Environments and the End of Localhost

MMS Founder
MMS Shawn Swyx Wang

Article originally posted on InfoQ. Visit InfoQ

Welcome to the InfoQ podcast [00:04]

Daniel Bryant: Hello, and welcome to the InfoQ podcast. I’m Daniel Bryant, head of Developer Relations at Ambassador Labs and News Manager here at InfoQ. Today I have the pleasure of sitting down with Shawn Wang, Head of Developer Experience at Airbyte. Shawn also goes by the name Swyx on the internet (that’s spelled S-W-Y-X), and I’ve been following his work for a number of years.

His recent blog post, entitled “The End of Localhost”, explored local development environments and the potential move to remote development, and it caused quite a stir in the developer communities. I wanted to chat with him and explore the topic in more depth.

In this episode of the podcast, we cover everything from the topic of local development environments to the exploration of hybrid and remote development. And of course, the future of IDEs.

Introductions [00:42]

Let’s get started. Hello, Swyx, and welcome to the InfoQ podcast. Thanks for joining us today.

Shawn “Swyx” Wang: Thank you for having me. I’m excited to chat.

Daniel Bryant: Could you briefly introduce yourself for the listeners, please?

Shawn “Swyx” Wang: My name is Shawn, I also go by Swyx. I’ve done a number of roles since joining tech. I used to be a trader here in London actually, where I’m recording right now, but I pivoted to tech not too long ago.

And I’ve worked at Netlify, Two Sigma, AWS, Temporal, and I’m currently at Airbyte. Basically just working on developer relations and developer experience, always working for a developer tools company. And on the side, I do some blogging, which is why I’m here.

Could you set the scene for writing your recent blog post “The End of Localhost: All the Cloud’s a Staging Environment, and All the Laptops Merely Clients?” [01:17]

Daniel Bryant: Perfect. Yes, that’s why I’ve been following you for many years, for your blogs. And I think you and I have led similar paths. I mean, I haven’t done the trading, but we’ve led similar paths in the software space. We could talk for hours on developer experience, developer relations, many things.

But in particular, I was really interested by your recent blog post, The End of Localhost. And that’s what we’re going to frame the podcast around today. I think so many great insights for folks from all different angles, whether you’re a developer, an operator, or whatever your persona is. So for folks who haven’t read the blog post, could you summarize perhaps the core premise of why you put together what was called, The End of Localhost: All the Cloud’s a Staging Environment, and All the Laptops Merely Clients? Great title.

Shawn “Swyx” Wang: I have a little bit of a pretend-Shakespeare literary bent. So if I see a reference, an opportunity to squeeze it in, I will.

The premise of this, let me just start at the inciting incident. I have basically been a skeptic of cloud development environments for the past few years. I’ve seen the rise of CodeSandbox, I’ve seen the rise of GitHub Codespaces. And I’m like, Yes, good for small things like testing stuff out, but you’ll never do anything serious in these. And then I started to see people like Guillermo Rauch from Vercel saying he no longer codes on a laptop, he just has an iPad. And that started to really shift my gears.

But the inciting incident for this particular blog post, where I really started to take it seriously, was Sam Lambert, who’s the CEO of PlanetScale, saying on a podcast that PlanetScale doesn’t believe in localhost. I don’t remember which podcast it was, I think it was Cloudcast or something like that? I put a link in the blog post. But for a dev tools company to come out and say that they don’t care about localhost is a bolder statement than I think most people realize, because for most people a proper development experience must include a local clone of whatever the cloud environment is doing, so you can run things locally without network access.

And this is exactly what I worked on at Netlify. I worked on Netlify Dev, which is a local proxy that compiles Netlify’s routing services down to the CLI. At AWS we spent a lot of time building the AWS Amplify CLI, which does clone a few things. It’s not a complete clone, because it’s very hard to clone AWS down to the local environment, and that’s where you start to see the issues. And that’s mostly what Docker Compose is: a way to locally clone your production cluster onto your local desktop. And you can see it’s not a very high fidelity local clone, particularly as you start to use more and more cloud services.
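The Docker Compose approach described here might look like this minimal, hypothetical `docker-compose.yml`: the app plus local stand-ins for its managed database and AWS services (service names and images are illustrative):

```yaml
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://dev:dev@db:5432/app
      AWS_ENDPOINT_URL: http://localstack:4566  # emulated AWS, not the real thing
  db:
    image: postgres:15            # stand-in for the managed production database
    environment:
      POSTGRES_USER: dev
      POSTGRES_PASSWORD: dev
      POSTGRES_DB: app
  localstack:
    image: localstack/localstack  # emulates S3, SQS, etc. on one local port
```

Each service is a rough local stand-in, and as the interview notes, fidelity drops as more managed cloud services enter the picture.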

And so the assertion is that this is a temporary phenomenon. This is not the end state of things, because all that we have to do and where every single cloud environment is trending, is that you should have branches. It should be cheap to spin them up and spin them down. You should not treat them like pets. You should treat them like cattle, which means if you want to spin something up for a few seconds, go ahead. It doesn’t cost much. You can do it without a second thought. And so why are you wasting any time maintaining the differences or debugging differences between dev and prod when you can just have multiple prods and just swap between them?

So that is the premise of the debate. That was a personal journey that took a few years. And I sort of wrote it up in a tweet. And I was like, I want to get people’s opinions. I knew this would be slightly controversial, but I was very taken aback by how strong opinion was. Obviously devs have very strong feelings about their development environment, and obviously it’s the main tool of our trade, so of course. But it really split along the lines of whether or not people use significant cloud services, or don’t see much benefit in them, and whether or not people have experienced the pain of maintaining different environments.

So one of my favorite lines, I really love this phrase from Bob Metcalfe, was quoted by Marc Andreessen: the browser reduced operating systems to a poorly debugged set of drivers. The browser is essentially a better operating system for apps than the operating system itself, because it’s a much easier application delivery mechanism. So if the cloud is doing the same thing to dev environments, then the cloud will reduce the dev machine to a poorly maintained set of environment mocks. We’ll never have full fidelity to the production environment, just because we don’t have the secrets, or we don’t have the right networking setup, or we don’t have the right data in place. And so any time spent debugging dev and prod differences is time wasted, as long as you can get cloud good enough. So let’s go get cloud good enough.

What’s the benefit of a dev environment being able to travel with you? [05:29]

Daniel Bryant: Perfect. That’s a great intro, and there’s so much to break down. I encourage listeners to read the full blog post and I’ll definitely link it in the show notes, because it’s a fantastic read, straight up. I’ve read it about 10 times now and I take away different things every time. It is a monster blog post; it’s fantastic.

But a few things caught my eye and I’d love to dive into them for the listeners in a bit more detail. You’ve touched on it a little bit there, but the ultimate developer wishlist stood out for me, as in, what do we want as developers? I think back to my days: mainly Java development, I did a bit of JavaScript, and I do a bit of Go now. And three things really stood out to me, and I’d like to work through them and get your opinion on them now.

The first one, and you hinted at that with the iPad development, but you said your personal dev environment travels with you no matter which device you use. And obviously there’s GitHub Codespaces, Gitpod, and Docker dev environments; there’s a bunch of tools out there. I’d love for you to break that down for us. What’s the benefit of that dev environment traveling with you, the stop/start of the dev environments? I’d love to hear your thoughts on why that was a wishlist item.

Shawn “Swyx” Wang: Oh, let me preface this with why I like to start with the wishlist, why I started this blog post with a wishlist. So some people might go directly into what a remote dev environment is. But I like to start from the problems rather than the solutions, because solutions will come and go, but the problems remain. And as long as we can set a long-term goal of what we actually want in an ideal world, we can kind of work our way backwards to how we get there. And so that’s why I started with this wishlist. And I think the dev environment traveling with you is kind of a luxury, but also it’s a productivity thing. Like all your bash aliases, all the CLIs that you always use. If you use a particular version of, I don’t know, whatever is trendy these days, like fzf, or all these sort of command line utilities that you know of, that are not fully distributed yet, but you just want to use everywhere you go.

Like my favorite is z, the little command line tool called z, that remembers every folder you’ve been in and does a sort of frequency matching, so that you can jump back and forth between folders just by typing a partial match of the folder name. All those little utilities that increase your productivity, you want to have them everywhere that you code. And sometimes you don’t have access to your machine, whether you’re traveling, or using a coworker’s laptop, or you’re, quote unquote, SSH’ing into a remote environment. And you’re trying to debug something and you just don’t have the utilities that you’re used to. So now you spend time rewriting lower-level scripts that you would have put together in a macro in a previous environment. It just takes so much time to set up, and people have built all sorts of tooling for this.

I think Spotify’s Backstage maybe comes to mind. Netflix also has a sort of bootstrapping tool that they use internally; all sorts of companies have this company dev environment. But then there’s also an element of personalization that makes your dev experience yours, rather than the one that’s prescribed to you by some company or your employer. And I think that is an ideal that we try to reach. We may never fully reach it, because it’s hard to basically teleport the whole machine regardless of any hardware. But I think we have enough generic tools and interfaces that we could possibly get 90% of that to work. One tool that has made significant progress, which you did mention, is VS Code, which implemented Settings Sync. It actually used to be a userland plugin that would post your settings to a Gist, and then you’d have to download the Gist and do all sorts of funky gymnastics, but VS Code just built it in and it just works. It downloads the extensions that you always use, so you get your intelligent suggestions, and it just works.

And I think that is something that is improving developer productivity as they move machines. So I think it’s a luxury, maybe on the scale of things it’s not as important as the other stuff, but I just chucked it in there because there’s some things that, if you pick the right solution, you get a bunch of these for free together.

Can you dive into the motivations and reality of being able to spin up a dev environment on-demand? [09:04]

Daniel Bryant: Let’s dive into the second item I pulled up. It is: any app’s environmental dependencies, everything from an HTTPS certificate to a sanitized sandbox fork of production databases, are immediately available to any teammate ramping up to contribute any feature, no docs, no runbook. And having done a bunch of microservice work, I can totally empathize with this. Not only are TLS certs an issue, but so are other services, databases; this I think is a big one. I’d like to get your take on this wishlist item.

Shawn “Swyx” Wang: So first of all, I appreciate that you called it the TLS cert. Because I actually debated whether I should say TLS cert or HTTPS cert, and I settled on HTTPS because that’s the thing that most people see. But it’s always confusing to have two names for that process. I’m a guy who cares about docs. I think it’s a mark of a good developer to write docs and to write runbooks. At the same time, I know that people ignore them, or people skip steps, or they are badly written and you just can’t quite follow them. And that’s also very frustrating. Ultimately the best docs are the docs you don’t have to read, and that is a product-level improvement that you kind of have to make there. But I do see that a lot of the cloud providers and cloud setups that are out there are trending towards this place, where again, this is an outcome of treating your environments like cattle, not pets.

All these should be commoditized things, immediately available. It should not be one of these forks or one of these certs per developer or per organization or per team or per feature. You should have multiple of these. You should have 10 of these running simultaneously if you want to. And why not? So that’s why I said any teammate ramping up to contribute any feature. You should be able to work on multiple features at the same time, not have any conflicts between them, and have a relatively high fidelity fork of whatever you have in production. And something I mentioned about the sanitized fork of the production database also matters a lot. There are some companies working on this, which is protecting PII. And I think these are difficult problems, but they’re not unsolvable problems. And there are companies working on this.

And it’s easy to see a future in which some standardized version of this is essentially solved. It will never be solved forever, because data is complex and heterogeneous and difficult. But on some level we can probably have some version of this future where environments are truly disposable, truly ephemeral, and as high fidelity as possible. And I don’t think whatever improvements we can make in localhost can match that. Whatever we do in production, being able to fork that is always going to be the superior alternative. I’m trying to express something, but I don’t quite have the words for it; the word that comes to mind is a substrate, an environment to code against. You want an environment to code against that is as high fidelity to your production as possible, rather than making it easy to run locally. And that’s, I think, what we achieve with this vision.

Daniel Bryant: Yes, I love it. And it brought back some memories. I remember I did a bunch of Ruby on Rails, and we had some really old services running on an old version of Ruby. And I had to use RVM locally to manage different versions of Ruby and different dependencies; the Gem bundles were a nightmare. And then the other day I fired up Gitpod, just to name-check Gitpod, and I was working on two branches in two different browser tabs of the same project. And I’m sure you can do this with other remote dev experiences, not just Gitpod, but that blew my mind: from fiddling around with RVM and all my local tools, to having two environments in two separate tabs on the same machine. I was like, that’s amazing, right?

Shawn “Swyx” Wang: To me it’s really about all these things that can be commoditized; you should never have to worry about them. It should just be a part of the workflow, as ubiquitous as Git. Git is forking code, and whatever this cloud development thing is, it’s just forking your environment. And I think that’s a very expansive vision, and it papers over a lot of infrastructural work that needs to be done, but it’s going to be done, because in terms of vision, that is the best way. The alternative is to leave things as the status quo, and that is not super productive.

Maybe one more thing I’ll offer, and this is something I’ve been thinking about a lot. I’ve been talking to the Deno guys, Ryan Dahl and the people working on Deno Deploy, and also the Cloudflare people, Cloudflare Workers and Workers for Platforms, and essentially there’s an intermediate tier that’s emerging between client and server. And that is serverless: client, server, serverless. And serverless functions are kind of an ephemeral tier of compute that are easily forkable, that are trusted, but sandboxed in a way that you can run untrusted code on them. It’s just a very interesting environment where you can get very, very close to production-level infrastructure in a fork.

So I feel like there’s a very strong empathy in the serverless movement with this movement, which is, you should be able to spin up these environments easily. And obviously the easiest one to spin up is compute, but now that trend is moving to databases. And I think more and more of the infrastructure primitives should become serverless, air-quote serverless, in that way, where it’s cheap and easy to spin things up.

What is your opinion on using local emulators of remote services for testing? [13:50]

Daniel Bryant: I remember, myself, I’ve played around with AWS Lambda and the SAM environment, and I was using LocalStack to emulate some of the services. For some use cases it worked really well, and for other use cases, to your point, the emulators really showed they were emulators.

Shawn “Swyx” Wang: They’re always going to trail behind. These guys have put a lot of work into it, but it’s kind of a losing battle or a Sisyphean battle is the Greek metaphor I would use. That you’re always going to trail behind and maybe you should just stop trying.
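The trade-off Daniel describes usually hinges on one mechanism: the emulator exposes the same API on a local port, and tooling is pointed at it via an endpoint override. A minimal sketch of that switch, with an illustrative helper function (real SDKs such as boto3 accept an `endpoint_url` argument, and recent AWS SDKs honor the `AWS_ENDPOINT_URL` environment variable):

```python
import os

LOCALSTACK_EDGE = "http://localhost:4566"  # LocalStack's default edge port


def resolve_endpoint(service: str) -> str:
    """Return the endpoint an AWS-style client should talk to.

    If AWS_ENDPOINT_URL is set (as it would be when running against a
    LocalStack container), every service call is routed to the local
    emulator; otherwise the real regional endpoint is used.
    """
    override = os.environ.get("AWS_ENDPOINT_URL")
    if override:
        return override
    region = os.environ.get("AWS_REGION", "us-east-1")
    return f"https://{service}.{region}.amazonaws.com"


# Real AWS by default...
os.environ.pop("AWS_ENDPOINT_URL", None)
os.environ.pop("AWS_REGION", None)
print(resolve_endpoint("s3"))   # https://s3.us-east-1.amazonaws.com

# ...and the emulator once the override is set.
os.environ["AWS_ENDPOINT_URL"] = LOCALSTACK_EDGE
print(resolve_endpoint("sqs"))  # http://localhost:4566
```

The emulator only ever sees the same API surface the SDK speaks, which is exactly why, as Shawn notes, it perpetually trails the real service's behavior.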

Do you think it is currently possible to scale from coding an MVP app in your bedroom to a Unicorn within a matter of weeks? [14:14]

Daniel Bryant: I think that’s great, and we can dive into that a bit more in a minute, because Yes, I totally get it. The final wishlist item I wanted to pick up on frames the rest of the discussion perfectly, I think. It’s maybe a bit controversial, but I think it’s great.

You said: you can scale up from MVP to Unicorn in weeks using one of the serverless or new Heroku-like platforms, with auth, payments, databases, and communication handled by world-class SaaS teams. And that’s the vision, going from one person in their bedroom to a Unicorn, right? Do you really think that’s possible?

Shawn “Swyx” Wang: I think that’s possible. I think that should be possible; it’s something that we want to get infrastructure to the point of doing. But I respect that not everyone will actually want this. I think you should have the option to have it. You should have the technologies available to you if you want them. But at the same time, large companies will always want to have their own platform that you run everything through. So I view the world of SaaS and all these sort of infrastructure-as-a-service companies as net additions.

You could roll it yourself. You would take a long while, you would rediscover all the foot guns that everyone else has discovered ahead of you and you would take a lot more time, but you would have full control of it. So there is some level at which it makes sense to make that bargain of, I will give over control to someone who knows more than me, I’ll pay them money, I’ll exchange fixed costs up front for variable costs that I know is higher in the long run, but total cost of ownership is lower. I’ll do all those trade offs for my size and until such time as I need to bring it in-house, I will bring it in-house. But I think these are net additions. I think that is a positive. We just have to be very clear about what actual primitives are additive to us and what are just really very thin combinations of other primitives that we should probably just bring in-house anyway, because they don’t add that much value.

It’s one of those difficult things to really make a judgment on, because as someone who’s not a domain expert, it’s always very easy to under-appreciate: like, oh, you’re just a cron service, why do I need you? And then you figure out how unreliable cron at scale can be. So I think it does take some experience. I don’t necessarily think that this is the biggest point for me, but I do think that this is a litmus test of how good cloud services are, and if we’re not there yet, maybe we should go build what’s missing. And to me, as someone who’s into dev tools and startups, I’m always looking for the negative space, where needs are not yet well addressed by the existing set of solutions.

Can you walk us through some local development environments and developer experiences that you’ve encountered? [16:31]

Daniel Bryant: I love it. I’m the same, working in the tools space as well. That totally makes sense: looking for the gaps, looking for the things to join it all together. Love that.

Let’s move on a little bit now, because you’ve got some great points I wanted to look into. I was really curious to dive into existing solutions already out there. In the next bit of the blog you said, hey, Google and the FAANGs already do a bunch of this stuff. And as you and I were talking off mic, we know not everyone’s Netflix, we know not everyone’s Google, but often they can show us where the future might be, or there’s something interesting there. In my general experience, when I chat to a lot of folks in this space, they don’t know how other companies develop.

It’s not a thing developers talk about. They talk about architecture, but they don’t necessarily talk about local dev setup. So I’d love for you to give us a bit of insight into what you’ve seen as you’ve looked around at dev environments. You looked at Google, you mentioned Slack, you mentioned a bunch of other folks in the blog post. I’d love to get your take on where the vanguard is, if you like. Where are they in terms of local, or not local, dev experience?

Shawn “Swyx” Wang: I want to preface this with: I haven’t actually worked at any of these companies that I’m about to mention. The only big co I worked at is Amazon. And Amazon had one, but we didn’t really use it internally. I forget the name of it. Cloud9?

Daniel Bryant: Oh Yes, I know it. It got acquired, right?

Shawn “Swyx” Wang: But we didn’t really use it internally for anything development-wise. But anyway, the list I had, and by the way, this is one part of how I blog: I’ll tweet out the rough direction I want a post to go, and then if you have enough of a relationship with your readers, they’ll contribute examples to you. So I didn’t know all this when I started out, but Google, Facebook, Etsy, Tesla, Palantir, Shopify, Slack and GitHub all chimed in with their versions of whatever the VM or workspace cloud development environment is, at some point in their lifespan. Some of them have talked about it publicly and some have not. Tesla was not public about this, but apparently, I got one tweet that said for vehicle OS development they moved from local to cloud because it was too expensive to run locally.

At some point it makes sense for those people, especially if they have a lot of proprietary infrastructure, if they want to restrict what their developers do or they want to provide tools that cannot be provided locally, all those things are worthwhile investments for them. And I just think it’s really interesting. So I’m very much of the opinion that we should not cargo cult, which is, oh, Netflix did it. Netflix is successful therefore doing cloud development environments makes you successful. That’s kind of not the way to think about this because that is the path to madness. But the way I look at technology innovation diffusion is that it usually starts at the big players and then it trickles down.

And so what I pay attention to is, when people leave, what do they miss? I have a few friends at Facebook, I have a few friends at Shopify. And the Facebookers talk about Facebook On Demand; at Facebook, local dev doesn’t exist. And that is just such a stark contrast that it is immediately compelling. But that’s one thing. The second thing that you want to look out for is, does it still make sense at the individual level? Is this something that you only do because you have a team of 10,000 people? Or does it make sense for three people? Does it make sense for one person? And I think this concept of cloud development environments, or the end of localhost, applies for one person as well, because I want to work on multiple things at the same time.

So in other words, right now there’s a lot of investment in proprietary tooling at the big co’s, because they can afford it. Eventually, some of these people will leave. Actually, some of them already are; I have a list of them at the end of the post. And then they work on spreading this technology to everyone else. So that’s kind of how innovation works. It starts proprietary and then it gets productized and commoditized. And I think that we’re in the middle of seeing this happen.

Are you a fan of Wardley Maps? [19:58]

Daniel Bryant: Perfect. Are you a fan of Wardley Maps? Sorry, I’m just thinking of all the things you mentioned there.

Shawn “Swyx” Wang: Yes.

Daniel Bryant: I love Simon’s work and you can literally see things going, there was that genesis to product, then commodity. Have you tried to map out, at all, the dev tooling space?

Shawn “Swyx” Wang: Oh dear. I have one. I actually have a separate post called The Four and a Half Kinds of Developer Platforms. And my map looks very different from Wardley mapping. I think Wardley Maps do figure a lot in my thinking. The problem is they tend to look like conspiracy theory maps, the It’s Always Sunny in Philadelphia meme. And I’m like, this is going to make people laugh at you rather than come along with you.

So I tend to keep my dimensionality simple: two-by-two matrices or some kind of hero’s journey type of storytelling. But Yes, I look at things in terms of money spent, and the way that industries move together or separately. So what I have is, the four and a half kinds of platforms would be application platforms, infrastructure platforms, developer platforms, which is the one we’re talking about today, and then the fourth one would be data platforms.

Daniel Bryant: Oh, interesting.

Shawn “Swyx” Wang: So, people working on data engineering. And I feel like web developers historically underestimate the amount of data engineering that is going on; that has been the biggest blind spot for me in catching up on all of them. And the final half platform is the platform of platforms that eventually emerges at all of these companies. Particularly things like logging: you will need to log all the things and feed information from one place to the other, build auth, and obviously the singletons in the company naturally emerge because they need to be the central store of all data, all information that is relevant to them. I used to work at Temporal, and it’s an open question whether workflow engines count as a platform of platforms, because they are used in both infrastructure and applications. And I think maybe that is something that should not be encouraged. Maybe we should have separate engines for those, because it’s very tricky to commingle these resources together. But yes, I think it’s an emerging area of debate for me, but I have mapped it out and that’s the TL;DR.

What feedback have you received on your blog post? Are developers ready to give up localhost yet? [21:59]

Daniel Bryant: That’s awesome. And that’s another podcast we can bring you in for, is to cover those platforms because that sounds fascinating. I’ll definitely be checking that blog later on. So I know we’re sort of coming close-ish to time. I want to leave a bit of a gap here. You and I talked before about addressing the feedback you’ve got from this post, both the good feedback and the negative feedback. Because to your point earlier, that some folks really do love their local dev environments. They do treat them like pets and I’ve been there in the past, so I would love to get your take now on the feedback you’ve got when you put the tweets out there, you put the blog out there. Was it predominantly good feedback you got, predominantly bad? I’d love your take on it.

Shawn “Swyx” Wang: I think it’s interesting. The thought leader types love it, and then the anonymous types don’t. If I could sum up the reactions …

Daniel Bryant: That’s awesome.

Shawn “Swyx” Wang: … it does trend that way, because I noticed that the people who were positive, Jonno Duggan, Erik Bernhardsson, Kelsey Hightower, Danny Ramos, Simon Willison, Paul Biggar, Patrick McKenzie: I can say these names and I don’t have to introduce them. They’re thought leader types. All of them were positive. And then the people I would have to introduce, or whose bios I don’t even know, were saying things like: you can pry my localhost from my cold dead hands. This is the final step on the road to the inescapable surveillance dystopia. General purpose computation on your own machine is probably going to be illegal in 20 years. It will be our greatest accomplishment if we can liberate even 1% of humanity from this soul-stifling metaverse. So really cutting, really brutal. But even that last comment was agreeing, saying, this is probably inevitable, we just don’t like it.

And I think there are two questions. One is, should you like it? And two is, is it inevitable, whether or not you like it? Those are the two levels to the debate. Maybe the second level is just, who knows? Who knows if it’s actually inevitable; only history can tell. All we’re doing here is observing some trends. Everything is trending in one direction. Maybe it’ll continue, maybe not. Should we like it? That is the bigger question. And I think it’s reasonable to want more control. It’s always reasonable to want privacy. And I think that’s why this post did well on Hacker News in terms of upvotes, but the comments just tore it to shreds.

Daniel Bryant: Brutal. Yes.

Shawn “Swyx” Wang: Because Hacker News, out of all the communities, loves privacy, loves open source, hates proprietary services. So I think that is entirely reasonable. And then the other thing to point out is, if you are a thought leader type, you probably work for a large vendor or you are a founder. So you are trying to provide proprietary services, and you have a vested interest in encouraging people that, hey, the cloud will not harm you, or the cloud is a TCO win, or whatever terminology you favor. It’s really up to you what your value system is. I think my north star is: am I more productive, and how much time am I spending on things I don’t really want to spend time on, incidental complexity? And what kind of apps do I want to develop? And my universe of apps is increasingly more infrastructure-centric and more data-driven than other types of apps.

If you’re just a front end dev, you take markdown, you transform it into HTML and that’s about it, then go ahead, be my guest. You can do everything on localhost. Go code on your planes, go code in your mountain cabins, I don’t care. But if you use any sort of significant cloud services, you will not be able to mock some percentage of them. And even if you deploy significant cluster services, you can’t run that on your local machine. So there are just a lot of places where you’re running into issues with development. And I think for the vast majority of people trying to make money, doing big things with technology, that’s what they’ll be concerned about. So I’m focused there.

Is there anything else we haven’t covered that you really want to point listeners to and focus on in the blog post? [25:21]

Daniel Bryant: So we’re getting to the end of our time here, but I wanted to ask if there is anything else we haven’t covered that you really want to point listeners to and focus on in the blog post?

Shawn “Swyx” Wang: There is one more nuance between the inner loop and outer loop where the cloud has basically already eaten the dev outer loop. So now we’re just talking about whether the cloud is eating the dev inner loop. So I encourage you to read more on the blog post there.

Daniel Bryant: So the inner and outer dev loop is super interesting, and that’s probably a whole podcast we can do in the future. So yes, because I’ve done a lot of thinking in that space as well. And that’s been a fantastic tour de force of the potential end of Localhost. Fantastic. If folks want to find out more, what’s the best way to engage with you, Swyx? On Twitter, LinkedIn, via your DX Tips site? Let the folks know.

Shawn “Swyx” Wang: Actually, intentionally, I don’t have a LinkedIn. This has been a sticking point for recruiters because they’re like, we need to hire people onto your team and people want to look you up. So I said, basically, try not to do CRUD data entry for a $26 billion company that turns around and sells your information. Literally, that’s all they do. You hate Google and Facebook doing that, so why are you doing it for free on LinkedIn? Anyway, so you can reach out to me on Twitter or DX Tips. DX Tips is the new, dedicated blog that I spun out for my writing. I’m hoping for it to be maybe a baby InfoQ.

Daniel Bryant: Ah, interesting, interesting competition.

Shawn “Swyx” Wang: I don’t have that ambition. I needed to split out my personal reflections from my professional reflections, and I thought that there was enough of an audience that I could do a dedicated one. I was more inspired by CSS-Tricks. So yes, sorry, to cut a long story short, you can reach out to me there.

Thank you for joining us today! [26:46]

Daniel Bryant: This has been awesome. It’s been a great chat we’ve got in the can here. And thank you very much for your time.

Shawn “Swyx” Wang: Thanks so much, Daniel. It was a pleasure.


