Month: August 2024
MMS • RSS
Posted on MongoDB Google News.
Datadog (NASDAQ:DDOG), MongoDB (NASDAQ:MDB) and Snowflake (NYSE:SNOW) were in the spotlight on Monday as Bank of America said that cloud consumption data showed a rebound in July after a “weak” June.
“Across the cloud consumption group, average engaged visits improved during July, following a soft June (engaged visit +10.4% [year-over-year] up from -1.4% in June),” analysts at the firm wrote in a research note. “In addition, [month-over-month] growth of +6.6% is well above seasonality of -2.8%, which is calculated as the trailing [three-year month-over-month] average.”
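The seasonal baseline the analysts describe is straightforward to reproduce. A minimal Java sketch, using hypothetical figures rather than Bank of America's actual data:

```java
public class Seasonality {
    // "Seasonality" as defined in the note: the trailing three-year average of
    // the month-over-month change observed for the same calendar month.
    static double trailingSeasonality(double[] priorYearMoMChanges) {
        double sum = 0;
        for (double change : priorYearMoMChanges) {
            sum += change;
        }
        return sum / priorYearMoMChanges.length;
    }

    public static void main(String[] args) {
        // Hypothetical July month-over-month changes for the three prior years, in percent
        double[] julyHistory = {-3.0, -2.4, -3.0};
        // A July reading above this baseline counts as better-than-seasonal
        System.out.printf("Seasonal baseline: %.1f%%%n", trailingSeasonality(julyHistory));
    }
}
```

With these made-up inputs the baseline works out to -2.8%, matching the shape of the calculation quoted above; the analysts' actual inputs are not public.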
For Datadog, engaged visits grew 5.2% year-over-year in July, up from a 9.2% decline in June. Additionally, visits grew 9.5% month-over-month, ahead of the normal seasonal decline of 1.7%. “The accelerating growth (+10% m/m, from +1.2% in June) is [a] positive data point that the healthy demand seen in 2Q24 results is likely continuing,” the analysts wrote.
For MongoDB, engaged visits likely grew 0.9% year-over-year in July, the analysts said, citing data from Web Market Analysis. “We are hopeful that solid July activity is an early sign that macro pressure, which impacted the disappointing FY25 outlook, is easing,” the analysts wrote.
For Snowflake, total page view growth in July was up 16.4% year-over-year, “up significantly” from the 1.2% decline seen in June. Additionally, the 13.3% rise month-over-month is “well above” the normal seasonal decline of 1.7%, Bank of America’s analysts said. “The June inflection is encouraging, perhaps suggesting that new product cycles (Cortex, Arctic) are emerging.”
The firm has a Buy rating on Datadog and MongoDB and a Neutral rating on Snowflake.
The firm also weighed in on Microsoft (MSFT) and Adobe (ADBE), adding that both Microsoft 365 and Adobe Creative Cloud saw better month-over-month activity in July.
Microsoft 365 engaged visits grew 31% year-over-year in July, up from 18.4% in June, and up 4.7% month-over-month, well above the seasonal decline of 6.2%. “We are hopeful this indicates easing SMB macro pressure, which has affected commercial Office additions in recent quarters,” the analysts said.
Adobe’s Creative Cloud saw a 34.5% year-over-year decline in engaged visits in July, better than the 44.6% decline seen in June. The month-over-month decline of 1% is above the normal seasonal decline of 9.2%.
“The data point could be an early indicator of traction with Firefly, which is increasingly embedded into key creative cloud offerings such as Photoshop and Lightroom,” the analysts added.
Bank of America has Buy ratings on Microsoft and Adobe.
Gagandeep Singh Nanda Acquires Position as National Sales Director for the Government …
MMS • RSS
Posted on MongoDB Google News.
MongoDB India has appointed Gagandeep Singh Nanda as National Sales Director (Government). Gagan joins the firm from Zscaler, where he spent more than four years as a Security Service Edge (SSE) and Zero Trust Network Access (ZTNA) specialist, helping drive Zscaler’s expansion and solidify its position as a leader in cybersecurity solutions.
Gagan, who has more than 16 years of industry experience, will now focus on applying MongoDB’s document-oriented NoSQL database architecture to government processes. MongoDB’s platform is designed to meet the needs of contemporary applications, including AI/ML, real-time analytics, vector search, time series and graph data, and geospatial workloads. Gagan has emphasized the importance of digital transformation in government services to improve citizen-centric services across a range of ministries and agencies.
Gurgaon, Haryana-based MongoDB India is well-known for its high-performance, scalable database solutions. The company serves a wide range of applications, meeting the demands of contemporary businesses across many industries.
Also read: Achieving Rapid Outcomes with AI-Driven Cloud Analytics
Java News Roundup: JDK 23 RC1, New HotSpot JEP, Hibernate and Tomcat Releases, GlassFish 8.0-M7
MMS • Michael Redlich
Article originally posted on InfoQ.
This week’s Java roundup for August 5th, 2024 features news highlighting: the first release candidates of JDK 23 and Gradle 8.10; JEP 483, Ahead-of-Time Class Loading & Linking, a new HotSpot feature; the releases of Hibernate ORM 6.6, Hibernate Search 7.2, Hibernate Reactive 2.4; multiple Apache Tomcat point and milestone releases; and GlassFish 8.0.0-M7.
OpenJDK
JEP 483, Ahead-of-Time Class Loading & Linking, has been promoted from its JEP Draft 8315737 to Candidate status. This JEP proposes to “improve startup time by making the classes of an application instantly available, in a loaded and linked state, when the HotSpot Java Virtual Machine starts.” This may be achieved by monitoring the application during one run and storing the loaded and linked forms of all classes in a cache for use in subsequent runs. This feature will lay a foundation for future improvements to both startup and warmup time.
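As a sketch of the proposed workflow (flag names are taken from the JEP text while the feature is still a Candidate, so they may change before integration), the cache is produced by a training run and then reused:

```shell
# Training run: record which classes the application loads and links
java -XX:AOTMode=record -XX:AOTConfiguration=app.aotconf -cp app.jar com.example.App

# Create the ahead-of-time cache from the recorded configuration
java -XX:AOTMode=create -XX:AOTConfiguration=app.aotconf -XX:AOTCache=app.aot -cp app.jar

# Subsequent runs start faster by loading classes from the cache
java -XX:AOTCache=app.aot -cp app.jar com.example.App
```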
JDK 23
Build 36 of the JDK 23 early-access builds was made available this past week featuring updates from Build 35 that include fixes for various issues. Further details on this release may be found in the release notes, and details on the new JDK 23 features may be found in this InfoQ news story.
As per the JDK 23 release schedule, Mark Reinhold, Chief Architect, Java Platform Group at Oracle, formally declared that JDK 23 has entered its first release candidate, as there are no unresolved P1 bugs in Build 36. The anticipated GA release is scheduled for September 17, 2024, and will ship with a final set of 12 features. More details on all of these new features may be found in this InfoQ news story.
JDK 24
Build 10 of the JDK 24 early-access builds was also made available this past week featuring updates from Build 9 that include fixes for various issues. Further details on this release may be found in the release notes.
For JDK 23 and JDK 24, developers are encouraged to report bugs via the Java Bug Database.
GlassFish
GlassFish 8.0.0-M7, the seventh milestone release, delivers notable changes such as: removal of throwing an IllegalArgumentException if an instance of the BundleDescriptor class is null while executing the toString() method defined in the Application class; removal of additional references to the deprecated SecurityManager, including formatting, name changes, and removal of unused method parameters; and an implementation of Jakarta Concurrency 3.1, the latest version for the upcoming release of Jakarta EE 11. More details on this release may be found in the release notes.
Quarkus
Quarkus 3.13.1, the first maintenance release in the 3.13 release train, provides bug fixes, improvements in documentation and notable changes such as: support for CompletableFuture when using the JsonRPC extension in the Dev UI; avoidance of a possible NullPointerException due to a race condition in the ApplicationLifecycleManager class during shutdown; and a resolution to a NullPointerException when using findFirstBy methods, defined in the Spring Data JPA project, that already return Optional. Further details on this release may be found in the changelog.
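The Spring Data JPA issue concerns derived queries of the following shape (the entity and repository names here are hypothetical); a findFirstBy... method that already declares Optional as its return type no longer triggers the NullPointerException:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical entity and repository, sketching the derived-query shape involved.
record Customer(long id, String email) {}

interface CustomerRepository {
    // Spring Data derives the query from the method name; the return type is
    // already Optional, so no extra wrapping is needed.
    Optional<Customer> findFirstByEmail(String email);
}

// Trivial in-memory stand-in for the implementation Spring Data would generate.
class InMemoryCustomerRepository implements CustomerRepository {
    private final List<Customer> customers;

    InMemoryCustomerRepository(List<Customer> customers) {
        this.customers = customers;
    }

    @Override
    public Optional<Customer> findFirstByEmail(String email) {
        return customers.stream()
                .filter(c -> c.email().equals(email))
                .findFirst();
    }
}
```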
Open Liberty
IBM has released version 24.0.0.8-beta of Open Liberty that introduces versionless features to simplify and streamline choosing compatible features for the MicroProfile, Jakarta EE and Java EE platforms. This is accomplished by configuring only the features at the specific versions required by an application. This composable design pattern minimizes runtime resource requirements and accelerates application startup times.
This release also provides previews of the upcoming releases of MicroProfile 7.0, scheduled to be released on or about August 22, 2024, and Jakarta EE 11, scheduled to be released in 3Q2024.
Hibernate
The release of Hibernate ORM 6.6.0.Final, preceded by the second release candidate a day earlier, delivers a complete implementation of the new Jakarta Data 1.0 specification that is: based on compile-time code generation via an annotation processor, enabling compile-time type safety; and backed by the StatelessSession interface, which has been enhanced especially to meet the needs of Jakarta Data. Other new features include: a new @ConcreteProxy annotation to replace the deprecated @Proxy and @LazyToOne annotations; and discriminator-based inheritance for types annotated with @Embeddable.
The release of Hibernate Search 7.2.0.Final, preceded by the first release candidate two days earlier, provides improvements to the Search DSL that include: new projection types; new predicates; enhancements to the existing predicate types; query parameters; and a deprecation of the ValueConvert enumeration in favor of the ValueModel enumeration. This version upgrades to Hibernate ORM 6.6.0.Final and introduces compatibility with OpenSearch 2.14, 2.15 and 2.16, and Elasticsearch 8.14 and 8.15.
The release of Hibernate Reactive 2.4.0.Final, also preceded by the first release candidate two days earlier, ships with notable changes such as: converting the cascadeOnLock() method, defined in the DefaultReactiveLockEventListener class, to be reactive; avoiding the creation of multiple connections during schema migration; and a dependency upgrade to Hibernate ORM 6.6.0.Final. More details on this release may be found in the release notes.
Apache Software Foundation
Versions 11.0.0-M24, 10.1.28 and 9.0.93 of Apache Tomcat deliver bug fixes and notable changes such as: aligning HTTP/2 with HTTP/1.1 by recycling the container’s internal request and response processing objects by default, behavior that can be controlled via the new discardRequestsAndResponses attribute on the HTTP/2 upgrade protocol; the addition of compatibility methods from JEP 454, Foreign Function & Memory API, that support OpenSSL, LibreSSL and BoringSSL, all of which require a minimum version of JDK 22; and support for the RFC 8297, An HTTP Status Code for Indicating Hints, specification, where applications can use this feature by casting the HttpServletResponse interface to the Response class and then calling the sendEarlyHints() method. Further details on these releases may be found in the release notes for version 11.0.0-M24, version 10.1.28 and version 9.0.93.
Infinispan
Infinispan 15.0.7.Final, the seventh maintenance release, provides resolutions to notable issues such as: throwing a more refined and descriptive exception if user properties are malformed; a NullPointerException when attempting to remove an entry with Xsite; and the consistent return of an empty array from the IntermediateCacheStream class, due to code copied from the toArray() method that was not updated. More details on this release may be found in the release notes.
Gradle
The first release candidate of Gradle 8.10 delivers resolutions to numerous issues and notable changes: improvements to the configuration cache, including a significant reduction in the cache file size and accelerated cache loading times; and improved behavior and callback execution for the GradleLifecycle API. Further details on this release may be found in the release notes.
Article: Prepare to Be Unprepared: Investing in Capacity to Adapt to Surprises in Software-Reliant Businesses
MMS • John Allspaw
Article originally posted on InfoQ.
Key Takeaways
- Building and maintaining resilience requires intentionally creating conditions where engineers can share, discuss, and demonstrate their expertise among others.
- Engineering resilience means enhancing and expanding how people successfully handle surprising scenarios, so creating opportunities for people to share the “messy details” of their experience handling an incident is paramount.
- The primary challenge in resilience engineering is understanding what does not go wrong in order to expand what goes well. We notice incidents, but we tend not to notice when they do not happen.
- Investing time and attention in learning about the goals other groups have, and what constraints they typically face supports reciprocity where groups mutually assist each other when needed.
- When people make mistakes, their actions are looked at closely, but when people solve problems, their actions are rarely looked at in detail. Resilience Engineering emphasizes the critical importance of the latter over the former.
Typical approaches to improving outcomes in software-reliant businesses narrowly focus on reducing the incidents they experience. There is an implied (almost unspoken) assumption that underlies this approach: that incidents are extraordinary aberrations, unconnected to “normal” work. They’re often seen as irritating distractions. For over twenty years, the field of Resilience Engineering has aimed to flip this approach around – by understanding what makes incidents so rare (relative to when and how they do not happen) and so minor (relative to how much worse they can be) and deliberately enhancing what makes that possible.
This article touches on a few aspects of this perspective shift.
Being prepared to be unprepared
Resilience represents the activities, conditions, and investments in capabilities that support people to adapt to situations which cannot be anticipated or foreseen. This is not my definition; it’s one that the Resilience Engineering community has developed after over 20 years of studying adaptation in uncertain, surprising, complex, and ambiguous situations.
I realize this is a very wordy definition. Another way to describe resilience is to point to a fundamental of the concept: adaptive capacity. This term has a long history in the fields of biology, ecology, environmental science, and even climate change. The Resilience Engineering community recognized this concept as applicable at a human and organizational level.
Adaptive capacity can be defined as “the potential to adapt in the future following a disturbance (a change in conditions, information, goals, or difficulties) that threatens a plan-in-progress.” Note the emphasis on the potential to adapt – it’s not adaptation itself, but the investments made prior to needing to adapt.
Richard Cook and David Woods (pioneers of the field) have called this being “poised” to adapt. This refers to the expertise, resources, and conditions already in place which make it possible for people to respond to an incident.
Resilience is not reliability++
My colleague Dr. David Woods has written about how reliability is different from resilience:
The problem is that reliability makes the assumption that the future will be just like the past. That assumption doesn’t hold because there are two facts about this universe that are unavoidable: There are finite resources and things change.
In other words, reliability is better understood as a probability: it is the likelihood that one of many identical things will fail over a specified period of time. Reliability is derived by testing or observing populations of things that have known (and precise) ideal dimensions and behaviors. Making predictions using reliability data assumes (incorrectly) that the future will look just like the past.
Related is the concept of robustness, which is all the processes, mechanisms, or automations we put in place to handle known failure modes. It is in contrast to making predictions using reliability data because it anticipates specific future failures and puts in place countermeasures to either mitigate them or lessen their impact. We can build robustness around failure modes we can predict, but the world changes in ways that surprise us – in ways that defy prediction.
Resilience is about the already-present skills and capabilities people draw on when responding to surprises, and the related ability for the system (people and the technology they operate as a whole) to anticipate and adapt in response to the surprises that occur.
Resilience hides in plain sight
A well-known and contrarian adage in the Resilience Engineering community is that Murphy’s Law – “anything that can go wrong, will” – is wrong. What can go wrong almost never does, but we don’t tend to notice that.
People engaged in modern work (not just software engineers) are continually adapting what they’re doing, according to the context they find themselves in. They’re able to avoid problems in most everything they do, almost all of the time. When things do go “sideways” and an issue crops up they need to handle or rectify, they are able to adapt to these situations due to the expertise they have.
Research in decision-making described in the article Seeing the invisible: Perceptual-cognitive aspects of expertise by Klein, G. A., & Hoffman, R. R. (2020) reveals that while demonstrations of expertise play out in time-pressured and high-consequence events (like incident response), expertise comes from experience with facing varying situations involved with “ordinary” everyday work. It is “hidden” because the speed and ease with which experts do ordinary work contrasts with how sophisticated the work is. Woods and Hollnagel (Woods & Hollnagel, 2006) call this the Law of Fluency:
“Well”-adapted cognitive work occurs with a facility that belies the difficulty of the demands resolved and the dilemmas balanced.
In other words: as people gain expertise, becoming more familiar and comfortable with confronting uncertainty and surprise, they also become less able to recognize their own skill in handling such challenges. This is what brings more novice observers to remark that the experts “make it look easy.”
In addition to this phenomenon where ingredients of expertise become more “invisible” as it grows, there are many activities people engage in which support resilient performance that, to practitioners, are viewed as just “good practices.”
Peer code review is an example of such an activity. On the surface, we can view reviewing code written by peers as a way the author of a given code change can build confidence (via critique and suggestions from colleagues) that it will behave as intended. Looking a bit deeper, we can see benefits for the reviewers as well: they have an opportunity to understand not only what new or changing functionality their peers are focused on, but also gain insight into how others understand the codebase, the language they’re writing with, specific techniques which may apply in their own projects, and a myriad of other sometimes subtle-but-real benefits. Seen in this way, the ordinary practice of peer code review can have a significant influence on how participants are able to adapt to surprising situations. Given the choice between an incident responder who doesn’t engage in code review and one who does, the latter has a clear advantage.
Adaptive capacity comes from amplifying expertise
In order to increase (and sustain) adaptive capacity, we need to first look closely at what makes it possible for people to respond to incidents as well as they do in your organization – what practices and conditions and resources they depend on. Incidents can always end up worse than they are. Look at what concrete things people do to keep them from getting as bad as they could have been. How do they recognize what was happening? How do they know what to do in this situation? Once you can identify these sources of adaptive capacity, you can a) enhance them and b) protect them from being eroded in the future.
Here are a few examples:
- New hire on-call shadowing. When a new teammate first takes on-call rotation responsibilities, they will shadow an experienced person for their first rotation. This provides an opportunity for the novice to understand what scenarios they may find themselves in and what a veteran does in those situations. It also gives the veteran a chance to explain and describe to the novice what, how, and why they are doing what they’re doing. This is a practice which can easily erode, especially under economic tightening: why pay two engineers to be on-call when it only “takes” one?
- Visibility across code repositories and logs. Many companies have a (sometimes implicit) policy of allowing all engineers access to all code repositories in use by the organization. This accessibility can help engineers explore and understand what mechanisms might be in play when trying to diagnose or rectify unexpected behavior. This is another example of a critical source of adaptive capacity, even though companies with this sort of policy don’t often recognize it as such; it’s just seen as ‘the way we do things.’ It’s also not too difficult to imagine this sort of all engineer/all repos access being removed or significantly reduced, in the name of compliance or security concerns.
Resilience is fueled by sharing adaptive capacity across the organization
To share adaptive capacity means first taking the initiative to understand:
- what makes work difficult for other “neighboring” units (teams, groups, etc.),
- what makes them good at handling surprises when they arise, and
- finding fluid ways to keep updated on how well they are handling surprises.
Only then can groups commit to mutually assisting each other when situations that need that neighboring support begin to emerge. This is called reciprocity.
How does this happen, practically? By investing time and attention in learning about the goals other groups have, and what constraints they typically face. One way this can happen in software teams is when engineers are encouraged to participate in voluntary and temporary “rotations” of working on other teams for even a short period of time.
The best and most concrete study on how adaptive capacity can be shared is “Building and revising adaptive capacity sharing for technical incident response: A case of resilience engineering” by Dr. Richard Cook and Beth Long in the journal Applied Ergonomics. Their study of an organization’s approach resulted in two main findings:
- The ability to borrow adaptive capacity from other units is a hallmark of resilient systems
- The deliberate adjustment of adaptive capacity sharing is a feature of some forms of resilience engineering.
An accessible version of this research paper, On adaptive capacity in incident response, describes a practitioner-led effort that created a new way of handling particularly difficult incidents. Under certain conditions, incident responders could “borrow” from a deep reserve of expertise to assist in effective and fluid ways. This reserve consisted of a small group of tenured volunteers with diverse incident experience. Bringing their expertise to bear in high-severity and difficult-to-resolve situations meant that incidents were handled more efficiently. Members of this volunteer support group noted that while the idea for this emerged from hands-on engineers, leaders at the company recognized its value and provided the support and organizational cover so that it could expand and evolve.
Success, in any form, tends to produce growth in things such as new markets and customers, use cases, functionality, partnerships, and integrations. This growth comes with increased complexity in technical systems, organizational boundaries, stakeholders, etc. This additional complexity is what produces novel and unexpected behaviors which require adaptation in order for the organization to remain viable. In other words: success means growth, growth means complexity, and complexity means surprise.
Sharing adaptive capacity is what creates conditions which enable an organization’s success to continue.
Investments in the sharing of adaptive capacity pay off in the ability to sustain success and keep brittle collapse of the organization’s work at arm’s length.
Building skills in incident response is building expertise (not just experience)
The best people can do when it comes to responding to an incident is to a) recognize immediately what is happening, and b) know immediately what to do about it. Anything that can bolster people’s expertise in support of those two things is paramount. Everything else that is commonly thought of as a skill is secondary and often misses the forest for the trees.
For example: it’s not difficult to find guidance akin to “clear communication to stakeholders.” On the surface, this is quite reasonable advice. When it’s unclear or ambiguous what is happening (which is often the case at the beginning of an incident), reporting “we don’t know what is happening and therefore we’re unsure how long it will take to resolve” isn’t something non-responding stakeholders typically view as “clear” communication. Yet, it’s the reality in many cases. Note also that efforts to communicate “what” is happening and “when” it’ll be resolved take away from the attention being spent on resolving the issue. This is an unsolvable dilemma. Dr. Laura Maguire’s PhD dissertation work explored this very dilemma and the phenomena that surround it, and she wrote a piece summarizing her findings, Exploring Costs of Coordination During Outages with Laura Maguire at QCon London.
So, what activities are productive in building skills to respond effectively to incidents? Understand the “messy details” of incidents that have already happened, from the perspective of the people who actually responded to them! What made it difficult or confusing? What did they actually do when they didn’t know what was happening? What did they know (that others didn’t) which made a difference in how they resolved the problem? These are always productive directions to take.
Finding existing sources of resilience
Look at incidents (otherwise known as “surprises which challenge plans”) you’re experiencing and look for what made handling those cases possible.
What did people look at? Telemetry? Logs? How did they know what to look for? How did they know where to look? Did they go looking in parts of code repositories or logs that others are responsible for? If they called others for help: how did they know who to call, or how to actually contact them?
Many of these activities require first having access to the data they relied on; they had this access. This may seem so ordinary as to be dismissed as unimportant, but it’s often not difficult to come up with reasons why that access might be taken away in the future.
People make use of whatever they think they need to when faced with solving a problem under time pressure with consequences for getting things wrong. More often than not, what people actually do isn’t given a good deal of attention. Unless, of course, their actions are seen as triggering a problem in the first place, or making an existing issue worse.
In other words: when people make mistakes, their actions are looked at closely. When people solve problems, their actions are rarely looked at in detail (if at all).
Resilience Engineering emphasizes the critical importance of the latter over the former.
MMS • RSS
Posted on MongoDB Google News.
Andrew Davidson, Senior Vice President of Product Management at MongoDB, on building and managing your data in the cloud.
Although I’ve spent most of my career in New York tech, I spent a year living in Bangladesh, helping advise on energy infrastructure. To radically summarize that period of my life, living in Bangladesh gave me an intimate view of one of the fastest-growing economies in Asia. Indeed, between 2011 and 2013—bookending my time there—Bangladesh’s population grew about 2.3%, and its GDP grew 16.6% (versus 1.5% and 7.9%, respectively, in the U.S.).
However, that sort of growth isn’t without its costs. In 2011, air pollution was a leading health risk in Bangladesh—and it still is today. Bangladesh might be the most exciting, vibrant place I’ve ever lived, but a rapid expansion of industry, combined with Bangladesh’s dense population, has taken (and continues to take) a toll on the people there—which brings me to the Industrial Revolution.
For much of human history, populations and economies grew—or shrank—more or less in tandem. But since the Industrial Revolution in the mid-1700s, both the global population and the world economy have exploded at an exponential and unprecedented rate. For example, between 1700 and 2000, the world population grew more than 950%. Likewise, global GDP grew more than 18,000% between 1700 and 2022.
But this taxed the planet. Rich economies got richer (in part) by burning the fossil fuels that powered industrial production. Conversely, countries without access to—or power over—natural resources didn’t grow or were repressed by richer countries. If the economy was growing, so were emissions, and if emissions fell, so did the economy.
A Shifting Order
Today, that relationship is changing. As we’ve moved from the industrial to the digital revolution, we’ve seen for the first time in history a decoupling of economic growth and emissions. In the U.S., for example, “GDP has doubled since 1990, but CO2 emissions have returned to the level back then.”
Broadly speaking, new technology—especially the internet—has lessened our dependence on natural resources and physical commodities. For example, instead of commuting to an office every day, my work depends on apps—some of which, like Adobe, are powered by MongoDB, where I’ve worked since 2013. I see huge promise in this tech-powered work model, which may cut people’s emissions by up to 29%.
Today, growth is increasingly less dependent on the construction of physical infrastructure or the extraction of fossil fuels. Since the 1990 benchmark, “there has been a 36% decline in the amount of energy needed to generate a unit of global GDP,” and the internet now has “a greater weight in GDP than agriculture or utilities.”
To borrow from Marc Andreessen, technology is eating traditional industry, and we’re discovering new avenues for progress and growth that are lessening our reliance on the planet’s resources.
AI To The Rescue(?)
AI is the hot topic du jour, and I’m particularly excited about the enormous potential it offers to accelerate innovation while mitigating the environmental impact of progress. I recognize that this might be an unpopular take, given stories about AI’s energy needs. According to a recent report by Goldman Sachs, for example, AI could double data center electricity consumption by the end of the decade, from 1% to 2% of global power to 3% to 4%.
Although this increase is indeed large, AI’s enormous potential—for example, possible GDP growth of 6.1% in the U.S. over the next decade—could more than offset its energy cost. What’s more, even projected data center electricity consumption (about 1,000 TWh by 2026, according to the IEA) would still be a fraction of other sectors, like industrial energy use, which accounted for about 10,000 TWh in 2020 or about 33% of global electricity.
Finally, Boston Consulting Group estimates that AI could help reduce overall emissions by 2.6 to 5.3 gigatons. Indeed, organizations worldwide are already using AI to combat climate change in a variety of ways.
For example, Google DeepMind has turned to AI to optimize industrial cooling and to increase the value of wind farm energy output. Meanwhile, MongoDB customers like AgroScout and Mortar.io are using AI to make agriculture and building more sustainable, respectively. In Brazil, Sipremo uses AI to better predict climate disasters, while London-based Greyparrot “has developed an AI system that analyzes waste processing and recycling facilities to help them recover and recycle more waste material.”
The need for solutions like these is urgent, as the climate crisis is exacerbating a number of already pressing challenges, like the burden of mosquito-borne disease. And climate change is disproportionately impacting much of the Global South, which has been dealing with a range of environmental and health concerns not necessarily driven by climate change (like unhealthy concentrations of ambient particulate matter and household air pollution).
But the road to recovery for developing regions is a challenging one. The barriers to climate innovation are high, and overcoming the debts (e.g., technical, infrastructure and financial) that developing countries are struggling under is a tall order. For example, a report from the ONE Campaign found that “more than one in five emerging markets and developing countries paid more to service their debt in 2022 than they received in external financing.”
I’m hopeful that, here too, AI will be a solution. In a world where AI becomes everyone’s intelligent companion, everyone becomes a developer, an innovator and a disruptor. Coding companions and tools that bring specialized knowledge to everyone’s fingertips will empower them to build modern technologies that transform industries for the better.
At the same time, AI will make it more cost-effective to innovate, removing the barriers that stand in the way of modernization in lower-income countries like Bangladesh. In those places, the promise AI holds—and that technology as a whole holds—is truly dramatic.
So, yes, you could say I’m optimistic about AI’s promise. After all, techno-optimism is in my bones: Before moving to Bangladesh and New York, I grew up and began my career in Silicon Valley. And although there’s much work to be done, the idea that technology holds this promise gives me hope. I can’t wait to see where it takes us.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?
Article originally posted on mongodb google news. Visit mongodb google news
Cwm LLC lifted its holdings in MongoDB, Inc. (NASDAQ:MDB – Free Report) by 7.6% during the second quarter, according to its most recent Form 13F filing with the Securities and Exchange Commission (SEC). The institutional investor owned 3,113 shares of the company’s stock after buying an additional 219 shares during the quarter. Cwm LLC’s holdings in MongoDB were worth $778,000 as of its most recent SEC filing.
Other institutional investors have also recently modified their holdings of the company. Transcendent Capital Group LLC acquired a new position in MongoDB during the fourth quarter worth $25,000. YHB Investment Advisors Inc. acquired a new position in MongoDB during the first quarter worth $41,000. GAMMA Investing LLC acquired a new position in MongoDB during the fourth quarter worth $50,000. Sunbelt Securities Inc. raised its holdings in MongoDB by 155.1% during the first quarter. Sunbelt Securities Inc. now owns 125 shares of the company’s stock worth $45,000 after purchasing an additional 76 shares during the last quarter. Finally, DSM Capital Partners LLC acquired a new position in MongoDB during the fourth quarter worth $61,000. 89.29% of the stock is currently owned by hedge funds and other institutional investors.
MongoDB Price Performance
MDB stock opened at $234.90 on Monday. The stock has a 50-day moving average price of $241.17 and a 200-day moving average price of $334.58. MongoDB, Inc. has a 12-month low of $212.74 and a 12-month high of $509.62. The company has a current ratio of 4.93, a quick ratio of 4.93 and a debt-to-equity ratio of 0.90. The company has a market cap of $17.23 billion, a PE ratio of -83.59 and a beta of 1.13.
MongoDB (NASDAQ:MDB – Get Free Report) last posted its quarterly earnings results on Thursday, May 30th. The company reported ($0.80) earnings per share (EPS) for the quarter, meeting analysts’ consensus estimates of ($0.80). MongoDB had a negative return on equity of 14.88% and a negative net margin of 11.50%. The firm had revenue of $450.56 million during the quarter, compared to the consensus estimate of $438.44 million. As a group, research analysts predict that MongoDB, Inc. will post -2.67 EPS for the current year.
Insider Transactions at MongoDB
In other news, Director Dwight A. Merriman sold 2,000 shares of the company’s stock in a transaction dated Tuesday, June 4th. The stock was sold at an average price of $234.24, for a total value of $468,480.00. Following the completion of the sale, the director now owns 1,146,784 shares of the company’s stock, valued at $268,622,684.16. In related news, CAO Thomas Bull sold 138 shares of the business’s stock in a transaction dated Tuesday, July 2nd. The stock was sold at an average price of $265.29, for a total value of $36,610.02. Following the transaction, the chief accounting officer now owns 17,222 shares in the company, valued at $4,568,824.38. Both sales were disclosed in filings with the Securities & Exchange Commission. Insiders sold 30,179 shares of company stock valued at $7,368,989 over the last quarter. Company insiders own 3.60% of the company’s stock.
Analysts Set New Price Targets
Several equities analysts recently issued reports on the company. Morgan Stanley reduced their price objective on MongoDB from $455.00 to $320.00 and set an “overweight” rating for the company in a report on Friday, May 31st. Bank of America cut their price target on MongoDB from $500.00 to $470.00 and set a “buy” rating for the company in a report on Friday, May 17th. Guggenheim raised MongoDB from a “sell” rating to a “neutral” rating in a report on Monday, June 3rd. Truist Financial cut their price target on MongoDB from $475.00 to $300.00 and set a “buy” rating for the company in a report on Friday, May 31st. Finally, Canaccord Genuity Group cut their price target on MongoDB from $435.00 to $325.00 and set a “buy” rating for the company in a report on Friday, May 31st. One research analyst has rated the stock with a sell rating, five have assigned a hold rating, nineteen have assigned a buy rating and one has issued a strong buy rating to the company. According to MarketBeat, the stock currently has a consensus rating of “Moderate Buy” and a consensus price target of $355.74.
View Our Latest Stock Report on MongoDB
About MongoDB
MongoDB, Inc., together with its subsidiaries, provides a general-purpose database platform worldwide. The company provides MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
Further Reading
Want to see what other hedge funds are holding MDB? Visit HoldingsChannel.com to get the latest 13F filings and insider trades for MongoDB, Inc. (NASDAQ:MDB – Free Report).
Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.
2024-08-12 | MongoDB, Inc. Sued for Securities Law Violations – MDB | Press Release – Stockhouse
NEW YORK, Aug. 12, 2024 /PRNewswire/ — The Gross Law Firm issues the following notice to shareholders of MongoDB, Inc. (NASDAQ: MDB).
Shareholders who purchased shares of MDB during the class period listed are encouraged to contact the firm regarding possible lead plaintiff appointment. Appointment as lead plaintiff is not required to partake in any recovery.
CONTACT US HERE:
https://securitiesclasslaw.com/securities/mongodb-inc-loss-submission-form/?id=94752&from=4
CLASS PERIOD: August 31, 2023 to May 30, 2024
ALLEGATIONS: According to the complaint, on March 7, 2024, MongoDB reported strong Q4 2024 results and then announced lower than expected full-year guidance for 2025. MongoDB attributed it to the Company’s change in its “sales incentive structure” which led to a decrease in revenue related to “unused commitments and multi-year licensing deals.” Following this news, MongoDB’s stock price fell by $28.59 per share to close at $383.42 per share. Later, on May 30, 2024, MongoDB further lowered its guidance for the full year 2025 attributing it to “macro impacting consumption growth.” Analysts commenting on the reduced guidance questioned if changes made to the Company’s marketing strategy “led to change in customer behavior and usage patterns.” Following this news, MongoDB’s stock price fell by $73.94 per share to close at $236.06 per share.
DEADLINE: September 9, 2024 Shareholders should not delay in registering for this class action. Register your information here: https://securitiesclasslaw.com/securities/mongodb-inc-loss-submission-form/?id=94752&from=4
NEXT STEPS FOR SHAREHOLDERS: Once you register as a shareholder who purchased shares of MDB during the timeframe listed above, you will be enrolled in a portfolio monitoring software to provide you with status updates throughout the lifecycle of the case. The deadline to seek to be a lead plaintiff is September 9, 2024. There is no cost or obligation to you to participate in this case.
WHY GROSS LAW FIRM? The Gross Law Firm is a nationally recognized class action law firm, and our mission is to protect the rights of all investors who have suffered as a result of deceit, fraud, and illegal business practices. The Gross Law Firm is committed to ensuring that companies adhere to responsible business practices and engage in good corporate citizenship. The firm seeks recovery on behalf of investors who incurred losses when false and/or misleading statements or the omission of material information by a company lead to artificial inflation of the company’s stock. Attorney advertising. Prior results do not guarantee similar outcomes.
CONTACT:
The Gross Law Firm
15 West 38th Street, 12th floor
New York, NY, 10018
Email: dg@securitiesclasslaw.com
Phone: (646) 453-8903
View original content to download multimedia:https://www.prnewswire.com/news-releases/mongodb-inc-sued-for-securities-law-violations–investors-should-contact-the-gross-law-firm-for-more-information–mdb-302218937.html
SOURCE The Gross Law Firm
Contact The Rosen Law Firm Before September 9, 2024 to Discuss Your Rights – MDB – Le Lézard
NEW YORK, Aug. 11, 2024 /PRNewswire/ —
Why: Rosen Law Firm, a global investor rights law firm, reminds purchasers of securities of MongoDB, Inc. (NASDAQ: MDB) between August 31, 2023 and May 30, 2024, both dates inclusive (the “Class Period”), of the important September 9, 2024 lead plaintiff deadline.
So What: If you purchased MongoDB securities during the Class Period you may be entitled to compensation without payment of any out of pocket fees or costs through a contingency fee arrangement.
What to do Next: To join the MongoDB class action, go to https://rosenlegal.com/submit-form/?case_id=27182 or call Phillip Kim, Esq. toll-free at 866-767-3653 or email [email protected] for information on the class action. A class action lawsuit has already been filed. If you wish to serve as lead plaintiff, you must move the Court no later than September 9, 2024. A lead plaintiff is a representative party acting on behalf of other class members in directing the litigation.
Details of the case: According to the lawsuit, throughout the Class Period, defendants created the false impression that they possessed reliable information pertaining to the Company’s projected revenue outlook and anticipated growth while also minimizing risk from seasonality and macroeconomic fluctuations. In truth, MongoDB’s sales force restructure, which prioritized reducing friction in the enrollment process, had resulted in complete loss of upfront commitments; a significant reduction in the information gathered by their sales force as to the trajectory for the new MongoDB Atlas enrollments; and reduced pressure on new enrollments to grow. Defendants misled investors by providing the public with materially flawed statements of confidence and growth projections which did not account for these variables. When the true details entered the market, the lawsuit claims that investors suffered damages.
To join the MongoDB class action, go to https://rosenlegal.com/submit-form/?case_id=27182 or call Phillip Kim, Esq. toll-free at 866-767-3653 or email [email protected] for information on the class action.
No Class Has Been Certified. Until a class is certified, you are not represented by counsel unless you retain one. You may select counsel of your choice. You may also remain an absent class member and do nothing at this point. An investor’s ability to share in any potential future recovery is not dependent upon serving as lead plaintiff.
Follow us for updates on LinkedIn: https://www.linkedin.com/company/the-rosen-law-firm, on Twitter: https://twitter.com/rosen_firm or on Facebook: https://www.facebook.com/rosenlawfirm/.
Attorney Advertising. Prior results do not guarantee a similar outcome.
Contact Information:
Laurence Rosen, Esq.
Phillip Kim, Esq.
The Rosen Law Firm, P.A.
275 Madison Avenue, 40th Floor
New York, NY 10016
Tel: (212) 686-1060
Toll Free: (866) 767-3653
Fax: (212) 202-3827
[email protected]
www.rosenlegal.com
SOURCE THE ROSEN LAW FIRM, P. A.
News published on 11 August 2024 at 14:45.
MMS • Ben Evans
Article originally posted on InfoQ. Visit InfoQ
The JSpecify collective has made its first release, JSpecify 1.0.0. The group’s mission is to define common sets of annotation types for use in JVM languages, to improve static analysis and language interoperation.
The projects and groups in the consensus include:
- OpenJDK
- EISOP
- PMD
- Android, Error Prone, Guava (Google)
- Kotlin, IntelliJ (JetBrains)
- Azure SDK (Microsoft)
- Sonar
- Spring
By participating in JSpecify, the members guarantee that their projects will remain backward compatible at the source level. This means that they will not rename the annotations in the jar, move them, or make other changes that would cause code compilation to fail after an update. In other words, it is safe to depend on the artifact, and it won’t be changed in any way that breaks application code.
This initial release focuses on the use of type-use annotations to indicate the nullability status of usages of static types. The primary use cases are expected to be variables, parameters and return values, but they can be used anywhere that tracking nullness makes conceptual sense.
Some possibilities do not pass the “conceptual sense” test, however; e.g., you still can’t write things like `class Foo extends @Nullable Bar`.
Background on nullability and nullness checkers
The subject of nullability has long been a topic of debate in the Java ecosystem. The idea is that developers should be better insulated from NullPointerExceptions (NPEs), as large swathes of use cases that could currently cause them will be ruled out at build time.
However, this promising idea has been held up by some unfortunate facts, specifically that Java has both:
- A long history which largely predates viable concepts of non-nullable values in programming languages.
- A strong tradition of backwards compatibility.
It has therefore been difficult to see a path forward that allows null-awareness to be added to the language in a consistent and non-breaking fashion.
Nevertheless, attempts have been made — the most famous is probably JSR 305 which ran for several years, but eventually became Dormant in 2012 without being completed and fully standardized.
After the failure of that effort, various companies and projects pressed ahead with their own versions of nullability annotations. Some of these projects primarily addressed their own use cases, while others were intended to be broadly usable by Java applications.
Of particular interest is the Kotlin language, which has built null-awareness into the type system. This is achieved by assuming that, for example, `String` indicates a value that cannot be `null`, and requiring the developer to explicitly indicate nullability as `String?`. This is mostly very successful, at least as long as a program is written in pure Kotlin.
Java developers sometimes exclaim that they “just want what Kotlin has”, but as the Kotlin docs make clear there are a number of subtleties here, especially when interoperating with Java code.
Currently, in the pure Java space, most projects approach nullability by treating null status as extra information to be processed by a build-time checker (static analysis) tool. By adding this information in the form of annotations, developers enable the checker to reduce or eliminate NPEs in the program being analyzed.
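As a rough sketch of the kind of code such a checker analyzes, consider the following. A stand-in `@Nullable` annotation is declared locally so the snippet compiles without any JSpecify dependency (in real code you would import `org.jspecify.annotations.Nullable`); a tool like NullAway or the Checker Framework would flag an unguarded dereference of the annotated return value at build time:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class NullnessSketch {
    // Local stand-in for org.jspecify.annotations.Nullable.
    @Target(ElementType.TYPE_USE)
    @Retention(RetentionPolicy.RUNTIME)
    @interface Nullable {}

    // The annotation tells a checker this method may return null.
    static @Nullable String lookup(String key) {
        return key.equals("host") ? "localhost" : null;
    }

    static int safeLength(String key) {
        String value = lookup(key);
        // A nullness checker requires this guard: calling
        // value.length() without it would be reported as a
        // possible NPE at build time.
        return value == null ? 0 : value.length();
    }

    public static void main(String[] args) {
        System.out.println(safeLength("host")); // 9
        System.out.println(safeLength("port")); // 0
    }
}
```

Note that the annotation changes nothing at runtime; it only gives the static analysis tool enough information to reject the unsafe call sites.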
Different projects have different philosophies about how much of a “soundness guarantee” they want to offer. Soundness comes at a cost – more complexity, more effort to migrate your codebase, more false positives to deal with, so not every developer will want full and complete guarantees.
JSpecify represents an attempt to unify these approaches, and the initial goals are quite modest: version 1.0.0 defines only four annotations:
- `@Nullable`
- `@NonNull`
- `@NullMarked`
- `@NullUnmarked`
The first two can be applied to specific type usages (e.g., parameter definitions or method return values), while the second pair can be applied broadly. Together, they provide usable, declarative, use-site null-awareness for Java types.
One common use case is to add `@NullMarked` to the module, package or type declarations to which you’re adding nullability information. This can save a lot of work, as it indicates that any remaining unannotated type usages are not nullable. Any exceptional cases can then be explicitly marked as `@Nullable`, and this arrangement is the sensible choice for a default.
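The scoping described above can be sketched as follows. Stand-in annotations mirroring JSpecify’s are declared locally so the example compiles without the `org.jspecify` jar; the class, method and field names are hypothetical:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class NullMarkedSketch {
    // Local stand-ins mirroring the JSpecify annotations.
    @Target({ElementType.TYPE, ElementType.PACKAGE, ElementType.MODULE})
    @Retention(RetentionPolicy.RUNTIME)
    @interface NullMarked {}

    @Target(ElementType.TYPE_USE)
    @Retention(RetentionPolicy.RUNTIME)
    @interface Nullable {}

    // Inside a @NullMarked scope, every unannotated type usage is
    // non-null by default; only @Nullable usages may hold null.
    @NullMarked
    static class UserDirectory {
        // The return type is the exceptional, explicitly nullable case.
        static @Nullable String emailFor(String userId) {
            return userId.equals("u1") ? "ada@example.com" : null;
        }

        // The parameters and return value are non-null by default here;
        // a checker would reject emailOrDefault(null, "none").
        static String emailOrDefault(String userId, String fallback) {
            String email = emailFor(userId);
            return email != null ? email : fallback;
        }
    }

    public static void main(String[] args) {
        System.out.println(UserDirectory.emailOrDefault("u1", "none"));
        System.out.println(UserDirectory.emailOrDefault("u2", "none"));
    }
}
```

The single class-level `@NullMarked` covers every declaration in `UserDirectory`, so only the genuinely nullable `emailFor` return type needs an annotation of its own.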
The intent is for end users to adopt (or continue to use) one of the participating projects, which are at different levels of maturity. For example, IntelliJ supports JSpecify annotations but does not yet fully support generics.
A reference checker is also available; it is not really intended for application developers but rather for the projects participating in JSpecify.
There is also a FAQ covering the deeper, more technical aspects of the design.
InfoQ contacted one of the key developers working on JSpecify – Juergen Hoeller (Spring Framework project lead).
InfoQ: Please tell us about your involvement in JSpecify and nullability:
Hoeller: Together with Sebastien Deleuze, I’ve been working on the nullability design in Spring Framework 5 which led to our participation in JSpecify.
InfoQ: Can you explain how JSpecify is used in your projects and why it’s important to them?
Hoeller: The Spring Framework project and many parts of the wider Spring portfolio have adopted a non-null-by-default design in 2017, motivated by our Kotlin integration but also providing significant benefits in Java-based projects (in combination with IntelliJ IDEA and tools like NullAway).
This is exposed in our public API for application-level callers to reason about the nullability of parameters and return values. It also goes deep into our internal codebase, verifying that our nullability assumptions and assertions match the actual code flow. This is hugely beneficial for our own internal design and maintenance as well.
The original JSR 305 annotations were embraced quite widely in tools, despite never officially having been finished. Spring’s own annotations use JSR 305 as meta-annotations for tool discoverability, leading to immediate support in IntelliJ IDEA and Kotlin.
Our involvement in JSpecify started from that existing arrangement of ours since we were never happy with the state of the JSR 305 annotations and wanted to have a strategic path for annotation semantics and tool support going forward, allowing us to evolve our APIs and codebase with tight null safety.
InfoQ: It’s taken 6 years to get to 1.0.0. Why do you think that it’s taken so long?
Hoeller: While our current use cases in Spring are pretty straightforward, focusing on parameter and return values as well as field values, the problem space itself is complex overall. The JSpecify mission aimed for generics support which turned out to be rather involved. JSpecify 1.0 considers many requirements from many stakeholders, and it is not easy to reach consensus with such a wide-ranging collaborative effort.
InfoQ: What does the future look like, if JSpecify is successful?
Hoeller: For a start, JSpecify is well-positioned to supersede JSR 305 for tool support and for common annotations in open source frameworks and libraries. It is critical to have such support on top of the currently common baselines such as JDK 17 since large parts of the open source ecosystem will remain on such a baseline for many years to come. Even on newer Java and Kotlin versions over the next few years, JDK 17 based frameworks and libraries will still be in widespread use, and JSpecify can evolve along with their needs.
InfoQ: What’s next for JSpecify to tackle?
Hoeller: From the Spring side, we need official meta-annotation support so that we can declare our own nullability annotations with JSpecify meta-annotations instead of JSR 305 meta-annotations.
InfoQ: Thank you!