Month: December 2023
Postgres pioneer Michael Stonebraker promises to upend the database once more – The Register
Interview What if we built the operating system on top of the database instead of the other way around? It sounds like an idea from an undergraduate student after one microdose too many, except it’s not. It’s a serious idea from someone who has already upended the computing industry and whose influence has spread into familiar products from Microsoft and Oracle.
Celebrating his 80th birthday this year, Michael Stonebraker continues his work in database research, but his mark on the industry has been cemented with PostgreSQL, the open source relational database system which, for the first time, became the most popular choice of database among developers this year, according to the 2023 Stack Overflow survey. Beyond the popular open source DBMS itself, vendors including the cloud hyperscalers, CockroachDB, and YugabyteDB all offer database services with a PostgreSQL-compatible front end.
Stonebraker’s first influential work started with Ingres, the early relational database system, which began as his research topic following his appointment as an assistant professor at UC Berkeley in 1971.
Speaking to The Register, he says: “My PhD thesis was on an aspect of Markov chains, and that, I realized, had no practical value whatsoever. I went to Berkeley, and you’ve got five years to make a contribution and get tenure. I knew it was not going to be my thesis topic. Then Eugene Wong, who was another faculty member at Berkeley, said, ‘Why don’t we look at databases?'”
The two read a then-recent proposal about relational databases from IBM researcher Edgar Codd called “A Relational Model of Data for Large Shared Data Banks.”
Stonebraker and Wong thought the Englishman’s idea was elegant and simple. “The obvious question was to try and build a relational database system. Both Eugene and I had no experience building system software but, like academics, we thought, let’s try it and see what happens. So, based on no experience, we set out to build Ingres. And that was what got me my tenure.”
Ingres had competition. IBM’s System R was the first to demonstrate the relational approach could provide working transactional performance and the first to implement the now ubiquitous SQL. Oracle began its relational system later in the 1970s. Ingres also had to face a platform problem.
“We got lots of people visiting Berkeley and asking us who’s the biggest user of Ingres. Then Arizona State University wanted to use it for a records database of 35,000 students but they couldn’t get over the fact they had to get an unsupported operating system from these guys at Bell Labs, namely Unix,” he says.
Ingres’s targeting of mid-range systems, into which Unix had newly emerged, also meant it did not support COBOL, the dominant language for business computing at the time.
“The only solution was to start a company,” Stonebraker says.
He went on to found Relational Technology to commercialize Ingres. It was later renamed Ingres Corporation and then bought by ASK Corporation in 1990, which in turn was bought by Computer Associates in 1994. Another Berkeley Ingres team member, Robert Epstein, went on to found Sybase, which for a decade was second to Oracle in the relational database market. In 1992, its product line was licensed to Microsoft, who used it for early versions of SQL Server.
But Stonebraker acknowledges the commercial codebase for Ingres was way ahead of the open source research project — other researchers could get the code for a nominal fee covering the tape it was stored on and the postal costs — so his team decided to push the code over a cliff and start all over again. What comes after Ingres? Postgres, obviously.
A new era
In 1986, a 28-page paper [PDF] — co-written with Larry Rowe — announced the design for Postgres, as it was then known, setting out six guiding ambitions. Among them were two that would prove pertinent to the database system’s longevity. One was to provide better support for complex objects. The second was to provide user extendibility for data types, operators and access methods.
Stonebraker tells us he knew from conversations with Ingres customers that being extendible would be important for a successful database in the future. “Once this customer called me and said, ‘You implemented time all wrong’,” he said.
The Berkeley professor was baffled because his team had gone to some length to ensure they implemented the Julian calendar correctly, leap years and all. But some financial bonds are paid in 12 equal months in a 360-day year, which you cannot implement in Ingres but you can in PostgreSQL, he says.
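The convention Stonebraker is alluding to is the 30/360 day count used for some bonds, in which every month is treated as 30 days and a year as 360. A minimal sketch of the arithmetic (my own illustration, ignoring the convention's end-of-month edge-case variants):

```python
def days_30_360(y1, m1, d1, y2, m2, d2):
    """Days between two dates under a simplified 30/360 day count:
    every month counts as 30 days, every year as 360."""
    # Common simplification: cap the day-of-month at 30.
    d1 = min(d1, 30)
    d2 = min(d2, 30)
    return (y2 - y1) * 360 + (m2 - m1) * 30 + (d2 - d1)

# A full year counts as exactly 360 days, i.e. 12 "equal" 30-day months,
# regardless of leap years or actual month lengths.
print(days_30_360(2023, 1, 15, 2024, 1, 15))  # -> 360
print(days_30_360(2023, 1, 15, 2023, 2, 15))  # -> 30
```

This is exactly the kind of domain-specific "time" that a hard-coded calendar type cannot express, but a user-defined type and operators can.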
The motivation to make the database extendible also came from wanting to support new data types. An early project with Ingres tried to use it as a geographic information system, far from its home turf of business data. It was “arbitrarily slow and unfixable,” Stonebraker says.
The vision has paid off over the last decade. Ten years ago, PostgreSQL added support for JSON documents, the document format around which the NoSQL databases MongoDB and Couchbase are built.
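As an illustration of the kind of query that JSON support enables, here is a sketch using SQLite's built-in json_extract function rather than PostgreSQL's own operators (such as ->> or jsonb_path_query), since the snippet cannot assume a running PostgreSQL server:

```python
import sqlite3

# Illustration only: SQLite's json_extract stands in for PostgreSQL's
# JSON operators so the snippet runs standalone.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute(
    "INSERT INTO docs (body) VALUES (?)",
    ('{"name": "widget", "stock": 7}',),
)
row = conn.execute(
    "SELECT json_extract(body, '$.name'), json_extract(body, '$.stock') FROM docs"
).fetchone()
print(row)  # -> ('widget', 7)
```

The point is that a relational engine can query inside documents with ordinary SQL, which is much of what the document databases offered.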
Stonebraker has been on record criticizing the NoSQL movement in the past. He tells The Register it was converging with relational databases because they were adopting SQL or SQL-like languages and they were accepting the need for consistency.
“NoSQL’s biggest good idea was the out of box experience, because with SQL databases, you have to construct the database, and then you have to define the cursor. They are hard to use. That’s one of the very valid criticisms made against SQL databases: the out-of-the-box experience sucks. You should be able to just turn it on and say, ‘Here’s some data’.”
The various services available to provide PostgreSQL and PostgreSQL-compatible databases go some way to address that, but the emergence of the DBMS as a popular open source system was a happy accident, and one Stonebraker had little to do with.
Although the research code for the database was — and remains — open source, building a database company around it was, at the time, impossible, as Stonebraker discovered when founding Illustra in 1992. “When we got venture capital funding for both Ingres and Postgres, VCs would have nothing to do with open source, that was a later phenomenon,” he says.
In 2005, Stonebraker founded Vertica based on a shared-nothing column-oriented DBMS for data warehousing, which he now says “would have benefited immensely by being open source but viability of open source code and VC community is a relatively recent phenomenon.”
‘Closed source databases are not the wave of the future’
Illustra was successful for a period. It was eventually sold to Informix for around $400 million in 1996, with Stonebraker’s share worth $6.5 million, Forbes wrote in 1997. Stonebraker became CTO of the parent company for four years.
It’s a comfortable sum, but chicken feed compared to Larry Ellison’s estimated net worth of $145 billion. Needless to say, Stonebraker is disparaging about Oracle, another early adopter of the relational model. “Ingres was always technically better and Postgres was practically better. It’s more flexible, and it’s open source. And these days, PostgreSQL is generally comparable in performance. In general, closed source databases are not the wave of the future and I think Oracle is highly priced and not very flexible,” Stonebraker says.
Nonetheless it was Oracle that made a decision which provided a boost to open source PostgreSQL. It bought open source MySQL, which some of the community did not trust in the hands of the proprietary software giant. Around the same time as Illustra and other companies commercialized Postgres, Berkeley released the code for POSTGRES under the MIT license, allowing other developers to work on it.
In 1994, Andrew Yu and Jolly Chen, both Berkeley graduates, replaced the query language POSTQUEL with SQL. The resulting Postgres95 was made freely available and modifiable under a more permissive license, and was later renamed PostgreSQL.
“What ended up happening was Illustra kind of gaining traction, but the big kicker was when this group of totally unrelated people I didn’t even know, picked up the open source Postgres code, which was still around, and ran with it, totally unbeknownst to me. That was a wonderful accident,” he says.
“When MySQL was bought by Oracle, developers got suspicious in droves, and defected to PostgreSQL. It was another happy accident. Its commercial success is wonderful, but it was largely serendipitous,” Stonebraker adds.
Meanwhile, database services have grown up around PostgreSQL. It has become the dominant front end for compatible, or nearly compatible, systems available from Google (AlloyDB and Cloud SQL), Microsoft (Azure Database for PostgreSQL), AWS (Aurora and RDS), CockroachDB, YugabyteDB, EDB, and Aiven.
“The whole world is moving to the cloud and Google, Amazon and Microsoft, are all betting the ranch on PostgreSQL compatibility. I think that’s a great idea. CockroachDB is wire-compatible with PostgreSQL. You can take a PostgreSQL application, and drop it on CockroachDB. PostgreSQL doesn’t have any distributed database capabilities but both YugabyteDB and CockroachDB do,” he says.
Stonebraker’s influence even reaches into the portfolio of rival Oracle. His federated database Mariposa became the basis for Cohera, a database company PeopleSoft bought in 2001, before becoming part of Oracle in 2004. In 2014, Stonebraker was recognized for the influence of his work on Ingres and Postgres with the Turing Award, netting $1 million from Google in the process.
Despite many of his ideas being so widely used in the database industry, which Gartner said was worth $91 billion in 2022, Stonebraker is laid back about other people using his ideas.
“I’ve done well financially. I knew Ted Codd, who was very magnanimous about saying you guys should all run the [ideas]. You want to change the world; any particular person is only part of that. I’ve always done open source code and shared code with anybody who wanted it. In the process, I’ve done well financially so yeah, I have no regrets at all,” he says.
But that’s not to say he is ready to retire. In his latest project, Stonebraker is ready to change the world again.
The idea for DBOS, a Database-Oriented Operating System, came from a conversation with Matei Zaharia, the author of Apache Spark who is also co-founder of analytics and ML company Databricks and associate professor at Berkeley.
“Spark and Databricks are in the business of managing Spark instances on the cloud. He said at any given moment, Databricks is often managing a million-ish Spark subtasks for various users. They couldn’t do that using traditional operating system scheduling techniques: they needed something that could scale. The obvious answer was to put all scheduling information into a database. That’s exactly what the Databricks guys did: they put it all in a PostgreSQL database, and then started whining about Postgres performance,” says Stonebraker.
Never one to shirk a challenge, Stonebraker thought, “Well, I can do better than that.”
The new project replaced Linux and Kubernetes with a new operating system stack, at the bottom of which sits a database system: the prototype uses VoltDB, the multi-node, multi-core, transactional, highly available DBMS that Stonebraker started.
“Basically, the operating system is an application to the database, rather than the other way around,” he says.
A paper Stonebraker co-authored with Zaharia and others explains: “All operating system state should be represented uniformly as database tables, and operations on this state should be made via queries from otherwise stateless tasks. This design makes it easy to scale and evolve the OS without whole-system refactoring, inspect and debug system state, upgrade components without downtime, manage decisions using machine learning, and implement sophisticated security features.”
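As a toy illustration of that design (my own sketch, not DBOS code), scheduler state can live in an ordinary table, with stateless tasks reading and updating it only through transactional queries. The real prototype targets a distributed DBMS such as VoltDB; SQLite stands in here:

```python
import sqlite3

# Toy illustration of the DBOS idea: scheduler state lives in a table,
# and "the OS" is just stateless code issuing transactional queries.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY,
    state TEXT NOT NULL,          -- 'ready' | 'running' | 'done'
    priority INTEGER NOT NULL
)""")
db.executemany("INSERT INTO tasks (state, priority) VALUES (?, ?)",
               [("ready", 5), ("ready", 9), ("ready", 1)])

def schedule_next(db):
    """Pick the highest-priority ready task and mark it running,
    all within one transaction."""
    with db:  # connection context manager: commit on success
        row = db.execute(
            "SELECT id FROM tasks WHERE state = 'ready' "
            "ORDER BY priority DESC LIMIT 1").fetchone()
        if row is None:
            return None
        db.execute("UPDATE tasks SET state = 'running' WHERE id = ?", row)
        return row[0]

print(schedule_next(db))  # -> 2 (the priority-9 task)
```

Because the state is just rows, it can be inspected, replicated, and updated with the same machinery the database already provides, which is the paper's argument for scalability and debuggability.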
Successful or otherwise, the OS-as-a-database application idea is unlikely to be Stonebraker’s last. After turning 80 in October, he tells The Register he is not about to slow down.
“I can’t imagine playing golf three days a week. I like what I do, and I will do it as long as I can be intellectually competitive,” he says. ®
MMS • Anthony Alford
Article originally posted on InfoQ. Visit InfoQ
OpenAI recently published a guide to Prompt Engineering. The guide lists six strategies for eliciting better responses from their GPT models, with a particular focus on examples for their latest version, GPT-4.
The guide’s six high-level strategies are: write clear instructions, provide reference text, split complex tasks into simpler subtasks, give the model time to “think”, use external tools, and test changes systematically. Each of the strategies is broken down into a set of specific, actionable tactics with example prompts. Many of the tactics are based on results of LLM research, such as chain-of-thought prompting or recursive summarization.
OpenAI’s research paper on GPT-3, published in 2020, showed how the model could perform a variety of natural language processing (NLP) tasks using few-shot learning; essentially, by prompting the model with a description or examples of the task to be performed. In 2022, OpenAI published a cookbook article which contained several “techniques for improving reliability” of GPT-3’s responses. Some of these, such as giving clear instructions and breaking up complex tasks, are still included in the new guide. The older cookbook guide also contains a bibliography of research papers supporting their techniques.
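Few-shot prompting in this sense just means packing worked examples of the task into the prompt ahead of the new input. A minimal sketch, with invented examples and formatting:

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: task description, worked examples,
    then the new input the model should complete."""
    lines = [instruction, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great product, works perfectly.", "positive"),
     ("Broke after two days.", "negative")],
    "Shipping was fast and the quality is excellent.",
)
print(prompt)
```

The model then continues the pattern, completing the final "Output:" line in the same format as the examples.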
Several of the guide’s tactics make use of the Chat API’s system message. According to OpenAI’s documentation, this parameter “helps set the behavior of the assistant.” One tactic suggests using it to give the model a persona for shaping its responses. Another suggests using it to pass the model a summary of a long conversation, or to give a set of instructions that are to be repeated for multiple user inputs.
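In the Chat API, the system message is simply the first entry in the messages array. A sketch of the persona tactic follows; the model name and wording are placeholders, and the request is only constructed, never sent:

```python
# Sketch of a Chat Completions request using the system message to set
# a persona. The payload shape follows OpenAI's documented messages
# format; nothing here actually calls the API.
request = {
    "model": "gpt-4",  # placeholder model name
    "messages": [
        {"role": "system",
         "content": ("You are a patient math tutor. Explain each step "
                     "and never just give the final answer.")},
        {"role": "user",
         "content": "Why does 0.1 + 0.2 != 0.3 in floating point?"},
    ],
}
roles = [m["role"] for m in request["messages"]]
print(roles)  # -> ['system', 'user']
```

The same slot can instead carry a running conversation summary or standing instructions that apply to every subsequent user message.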
The use external tools strategy gives tips on interfacing the GPT model with other systems, with pointers to articles in OpenAI’s cookbook. One of the tactics suggests that instead of asking the model to perform math calculations itself, it should instead generate Python code to do the calculation; the code would then be extracted from the model response and executed. The guide does, however, contain a disclaimer that the code the model produces is not guaranteed to be safe, and should only be executed in a sandbox.
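The extract-and-execute tactic might look like the following sketch, where the model reply is hard-coded for illustration and exec stands in for what should, per the guide's disclaimer, be a proper sandbox:

```python
import re

# Pretend model response containing a fenced Python block.
model_reply = ("Sure, here is the calculation:\n\n"
               "```python\nresult = (1234 * 5678) % 97\n```\n")

# Pull out the first fenced python block from the reply.
match = re.search(r"```python\n(.*?)```", model_reply, re.DOTALL)
namespace = {}
if match:
    # WARNING: model-generated code is untrusted; run it only in a
    # real sandbox. exec() here is purely illustrative.
    exec(match.group(1), namespace)
print(namespace.get("result"))  # -> 51
```

Delegating arithmetic to real code sidesteps the model's tendency to make token-level mistakes on multi-digit math.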
Another strategy in the guide, test changes systematically, deals with the problem of deciding if a different prompt actually results in better or worse output. This strategy suggests using the OpenAI Evals framework, which InfoQ covered along with the release of GPT-4. The strategy also suggests using the model to check its own work “with reference to gold-standard answers,” via the system message.
In a Hacker News discussion about the guide, one user said:
I’ve been hesitant lately to dedicate a lot of time to learning how to perfect prompts. It appears every new version, not to mention different LLMs, responds differently. With the rapid advancement we’re seeing, in two years or five, we might not even need such complex prompting as systems get smarter.
Several other LLM providers have also released prompt engineering tips. Microsoft Azure, which provides access to GPT models as a service, has a list of techniques similar to OpenAI’s; their guide also provides tips on setting model parameters such as temperature and top_p, which control the randomness of the model’s output generation. Google’s Gemini API documentation contains several prompt design strategies as well as suggestions for the top_p and temperature values.
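Temperature and top_p both reshape the sampling distribution: temperature rescales token logits before the softmax, while top_p (nucleus sampling) truncates sampling to the smallest set of tokens whose probabilities sum to at least p. A self-contained sketch of the underlying math, not any provider's implementation:

```python
import math

def apply_temperature(logits, temperature):
    """Softmax over logits scaled by 1/temperature; lower temperature
    sharpens the distribution, higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, p):
    """Keep the smallest set of highest-probability tokens whose
    cumulative probability reaches p, then renormalize."""
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

probs = apply_temperature([2.0, 1.0, 0.1], temperature=0.5)
print(top_p_filter(probs, p=0.9))  # low-probability tail is cut off
```

With temperature 0.5 the leading token dominates, and top_p then drops the unlikely third token entirely before sampling.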
MMS • Shailesh Rangari
Article originally posted on InfoQ. Visit InfoQ
Key Takeaways
- Compliance is a foundation for effective risk management. When companies navigate tricky rules and commit to doing things ethically, they have a particular edge over their competitors.
- Transitioning from a “Compliance-First” approach to a “Risk-First” mindset recognizes that compliance should not be viewed in isolation but as an integral component of a broader risk management strategy.
- A “risk-first” attitude is a philosophy that focuses on identifying, treating, and managing the highest compliance risks and prioritizing them through controls, policies, and standard operating procedures.
- A risk-first approach enhances organizational resilience and fortifies a foundation where risk awareness becomes an inherent part of decision-making processes at all levels.
- Organizations that provide clear and comprehensive guidance on employees’ roles in managing compliance cultivate an environment where individuals are empowered to explore and innovate within well-defined parameters.
Introduction
Compliance is fundamental to modern business operations and integral to their success. It involves adhering to legal and regulatory requirements, industry standards, and ethical business practices. Compliance is crucial for organizations to manage risks, protect against legal penalties and reputational damage, and provide a competitive advantage. In today’s business landscape, where social responsibility and ethical behavior are more critical than ever, compliance has become vital to organizational success. Organizations can safeguard their reputation and ensure long-term sustainability by prioritizing compliance.
The significance of compliance extends beyond a mere checklist of obligations; it is a foundation for effective risk management, acting as a shield against potential legal penalties and reputational damage. In today’s business world, where people expect companies to be socially responsible and ethical, following the rules becomes essential. When companies navigate tricky rules and commit to doing things ethically, they have a particular edge over their competitors.
A paradigm shift is underway as businesses evolve – transitioning from a traditional “Compliance-First” approach to a more dynamic and forward-thinking “Risk-First” mindset. This cultural shift recognizes that compliance, while essential, should not be viewed in isolation but as an integral component of a broader risk management strategy. This evolution is not merely a conceptual adjustment but a pragmatic necessity, as organizations seek to proactively identify, understand, and mitigate risks, enhancing their resilience and adaptability in an ever-changing business environment.
This examination dives into the importance of companies adopting a cultural transformation: a move from a narrow emphasis solely on compliance to a broader, more strategic embrace of risk.
Beyond mere obligation, this shift fosters a culture that meets regulatory requirements and positions organizations to thrive amid uncertainty, bolstering their long-term sustainability. As we explore the complexities of this change, we uncover the fundamental connection between compliance and risk.
This exposition sheds light on the way for organizations to meet expectations and go beyond them, ushering in a new era of resilience, innovation, and lasting success.
Compliance Fixation
A compliance-first mindset prioritizes compliance with laws, regulations, and industry standards over other considerations in decision-making and operations. Organizations in densely regulated industries, such as financial services, transportation, and healthcare, often adopt this approach to ensure they meet their legal and ethical obligations and minimize the risk of penalties or legal action. It entails actively recognizing and resolving potential compliance issues while establishing and enforcing processes and controls to maintain conformance. This approach focuses on compliance with regulations and requirements and meets minimum standards to avoid legal and reputational consequences. In a compliance-first mindset, organizations view risk management as a cost center rather than a strategic opportunity. Organizations tend to lean heavily on a compliance-oriented posture due to one or more of the following reasons:
- Businesses and organizations with strong legal and regulatory obligations are less likely to face legal and financial penalties, reputational damage, and operational disruptions.
- A compliance focus helps organizations maintain a positive reputation and build trust with customers and shareholders by demonstrating a commitment to ethical and responsible business practices.
- A compliance-leaning posture helps organizations manage, mitigate, and transfer risks more effectively. This focus on compliance helps minimize the potential negative impact of risks on their operations and reputation.
- Organizations that focus on compliance strive to follow the applicable rules and regulations, which helps them avoid hefty fines and damaging legal consequences. Additionally, it can help mitigate risks and reduce the costs associated with risk management.
- Customers and shareholders perceive organizations with a compliance-focused approach as more responsible and trustworthy, giving them a competitive edge over those with less adherence to regulations.
While this compliance-focused approach has numerous benefits, it also presents several challenges that organizations must be aware of. Some of these challenges include the following:
- Inflexibility: A compliance-first mindset can make organizations less adaptable to changing business and economic conditions. Their focus on meeting legal and regulatory requirements ultimately costs them the agility to respond to shifting business landscapes.
- Bureaucracy: Adopting a compliance-first mindset can introduce a layered decision-making process that slows decision-making and operations and increases costs.
- Innovation: A compliance-first mindset can stifle innovation and creativity, as organizations may be less willing to take risks or try new things if they think it might put them in violation of laws and regulations.
- Limited perspective: Organizations with a compliance-first mindset may be so focused on meeting legal and regulatory requirements that they miss other vital risks or opportunities.
- Limited customer focus: A compliance-first mindset may lead to a lack of focus on customer needs, as the company may be more focused on meeting legal and regulatory requirements than on meeting the needs of its customers.
An overemphasis on meeting compliance requirements comes at a steep price and is detrimental to other important business goals. This preoccupation can lead to a narrow and rigid focus on compliance, resulting in a lack of innovation and risk-taking. It can also lead to a culture of fear and avoidance, where employees prioritize compliance over ethical behavior or customer satisfaction. While compliance is essential for legal and ethical reasons, a preoccupation with compliance can hinder organizational growth and development.
Case Studies: The Cost of a Compliance-First Mindset
There have been many high-profile cases where a compliance-first mindset has led to high costs and damage to organizations.
- Volkswagen: In 2015, investigators discovered that Volkswagen had installed software in its diesel engines to cheat emissions tests. According to their statements, the company’s compliance-first culture pressured employees to meet emissions targets at the expense of ethical behavior.
- Equifax: In 2017, Equifax suffered a massive data breach that exposed the personal information of millions of customers. The company’s focus on meeting its compliance goals rather than genuinely improving its systems and network security was one of the many driving factors behind the data breach.
Building Resilience
A “risk-first” attitude is a philosophy that focuses on identifying, treating, and managing the highest compliance risks and prioritizing them through controls, policies, and standard operating procedures. This approach helps prioritize and allocate resources to areas with the highest compliance risks. Organizations can develop targeted and efficient compliance strategies by assessing the likelihood and impact of each risk. Organizations can stay ahead of the curve with a risk-first compliance approach, ensuring they meet the highest compliance standards and avoid costly consequences by addressing the most impactful compliance risks. The advantages of a risk-first organizational mindset include:
- A risk-first philosophy can be used to identify, prioritize, and address financial, compliance/legal, operational, and reputational risks.
- Improve resilience, allowing the organization to respond to unexpected, disruptive events and recover from them without a significant downturn.
- Foster a culture of innovation, experimentation, and enhancements through calculated risk-taking and helping employees think outside the box for innovative solutions.
- Gain a competitive edge over their less risk-aware counterparts by equipping themselves to handle unexpected events and adapt to changing market conditions.
Establishing a risk-first culture for compliance involves instilling a mindset where employees prioritize the integral aspects of risk management and compliance in their day-to-day responsibilities. This proactive approach enhances organizational resilience and fortifies a foundation where risk awareness becomes an inherent part of decision-making processes at all levels. We discuss steps to help foster a risk-first culture for compliance:
- Businesses must ensure their staff comprehends the significance of risk management and compliance in their job roles. Governance, Risk, and Compliance (GRC) teams are the experts in this field. GRC must provide a precise description and illustrations of what constitutes a risk and how compliance is interpreted in their organization.
- Businesses should create and maintain clear standards and guidelines for managing compliance risks and meeting regulatory requirements. These documents should be accessible to the entire workforce and revised periodically as changes in regulations or risks occur. This documentation, accompanied by regular training, will ensure that employees know their responsibilities and can take necessary steps to reduce compliance risks and adhere to regulations.
- Employees should receive regular instruction on risk management and compliance. This instruction should be tailored to each employee’s particular roles and duties so that they can recognize, assess, and reduce compliance risks in their workplace. This approach will help ensure employees have the aptitude to handle risk and abide by regulations, decreasing the chance of non-compliance and connected risks.
- Creating an open and honest communication platform can help employees express their worries about compliance risks and non-compliance. Encouraging and enabling staff to raise issues and ensuring the organization takes their concerns seriously can help identify and reduce compliance risks, preventing potential business damage.
- Organizational leaders should lead by example by following the risk-first attitude to compliance. They should inspire and appreciate those who prioritize risk management and compliance in their work without compromising the quality of deliverables. Leaders should set a positive example for their staff and promote responsibility and accountability for risk management and compliance.
- Businesses must regularly review and evaluate their risk management and compliance processes and identify areas for improvement. Encourage employees to suggest improvements and implement changes where necessary. This approach will help ensure that risk management and compliance processes remain practical and current, reducing the likelihood of non-compliance and associated risks.
- Recognizing and celebrating risk management and compliance successes can inspire and motivate employees to prioritize risk management and compliance. Sharing success stories and using them as examples can encourage employees to follow suit and maintain high standards of risk management and compliance.
Organizations that provide clear and comprehensive guidance on employees’ roles in managing compliance cultivate a secure foundation which fosters an environment where individuals are empowered to explore and innovate within well-defined parameters. This proactive communication approach ensures a safe space for experimentation, promoting a culture of responsible decision-making and adherence to compliance standards, which is crucial for maintaining competitiveness and adaptability in the face of evolving market trends. Furthermore, fostering a blameless culture enhances employee engagement, resulting in a more dedicated and competitive workforce.
When individuals feel valued and encouraged to take calculated risks, a sense of ownership and purpose emerges, contributing to improved decision-making and leadership effectiveness. Embracing challenges provides a competitive edge in a dynamic business landscape and positions companies to lead progress by promoting experimentation, encouraging trial and error, and fostering a culture of continuous growth.
Building a culture prioritizing risk management and compliance requires determination, teamwork, and continual improvement. A risk-first attitude on compliance can decrease risks, ensure that regulations are followed, and protect an organization’s standing. Organizations must consider risk first in the present ever-changing and unpredictable climate. By following the basics of risk management and compliance, businesses can be better prepared to recognize and take care of potential risks, remain compliant, and be successful in times of doubt. Establishing a risk-first culture is beneficial to a company and essential for guaranteeing a robust and thriving future.
MMS • Audrey Troutt
Article originally posted on InfoQ. Visit InfoQ
Transcript
Troutt: I’m Audrey, CTO at Tomo, which is a real estate and fintech startup. The CTO title is very new. I want to talk about growth. I want to talk about transitions in your career. This is very top of mind for me, because I became a CTO about one month ago. It’s a big deal. This is a dream job. This has been my dream job for a long time. Before that, for the last two and a half years, I was VP of Engineering at Tomo. I also loved being a VP. I built the engineering organization, and our technology platform, and the engineering culture all from the ground up. I was a great VP. Now, as CTO, I find myself in this new role, in this new position, with new challenges and opportunities ahead. As I prepare myself to grow into this new role and grow out of my previous role, I’m thinking about and reminded about all the transitions that I’ve made in my career from when I was just starting out as a software engineer, all the way up through all the steps until now. I think about where I need to grow, where I need to grow in terms of depth, and how the breadth of my understanding has grown. I need a deeper understanding of new parts of the business that I haven’t been involved in before. The breadth of my responsibilities certainly has changed from my previous role. I know that one of the pitfalls that I need to avoid is holding on too tightly to those things that I loved and that I was really good at, at being a VP. Because holding close to that will only hold me back from growing into a great CTO. These are lessons that I’ve learned many times through my career. Why I’m here is to tell you a lot of stories about those lessons that I learned. I want to share those with you in hopes that it’ll be helpful as well.
Why Are You Here?
I hope that’s why you’re here. I talked to a few of you to find out why you’re here as well. You might be here because you’re curious. What do people do all day if they have the title of engineer, or senior engineer, or tech lead, or platform lead? I’m going to talk about that. We’re going to start there in talking about titles and a framework. I want to define some terms that I use when I talk about career growth with engineers, that translate a little bit better across companies. Because as I think we all know, titles can be really slippery when you’re looking at your growth across different companies and looking to coach folks from other places. Overall, I’m hoping that you’re here to gain a model for thinking about your growth, and your career, and thinking about the growth of other engineers that you’re going to be mentoring. This talk is about some of the surprising inflection points that I’ve seen, and I’ve discovered as I’ve been going from one level to the next. I want to give you a heads-up about what to expect, and some advice along the way. I want to share with you overall how important it is to grow others, to grow leaders in order to grow yourself in your career. In my experience, these lessons are a lot easier to see in hindsight. I’m hoping it’ll be helpful to you if I share these as my mentors have done for me over the years.
Model for Growth (Definitions)
We’re going to start with some definitions. Companies have career ladders with titles like this, with engineer, senior engineer, principal engineer, staff engineer. Does principal come before staff or is that after? There are many ways you can define these levels. I’m sure we have all experienced difficulty in trying to match these up across companies. I can’t fix that. I’m not going to fix that for you. A different way to look at your career ladder is by talking about the role you play on your team and how that changes as you level up and grow. Here’s how I like to frame it. You start out as a trusted contributor. Your objective is to learn and become a trusted contributor on your team. You’ve got to build your technical foundation for the languages that you’re using, the platform that you’re working on, the tools that you use as a developer. You need to learn about the coding standards and the process that your team uses. You need to learn about the domain that you’re working in, and the product that you’re working on. You get to a point where you can take a well-defined user story, ticket, or bug report, and be trusted to fix it independently. That’s what it means to be a trusted contributor on your team and to your code base.
You get to a point where you may not always be able to articulate why, but you know what the right thing is to do at any given time. You might still need some help with complex issues, or when the architecture isn’t really well suited for the problem that you’re trying to solve. You can add a ton of value as a trusted contributor. Then, as a feature leader, you’ve reached a point where not only can you independently solve even complex or novel problems, but you can also take a feature or product request, help to dig in on the requirements, and break down the work into an implementation plan that you and your team can follow. You can lead feature development for your team for well scoped projects, for well scoped features. Feature leaders are at a point in their career where they can start to raise the bar for quality, or test coverage, or tooling, or documentation, and other important concerns of our craft. At this level, they’re often involved in both asking and answering that question why, not just what we need to do, but why that’s really important. Still keeping mostly within the scope of your team there.
Then there’s team leader, or tech leader. As you grow, you take on the role of a tech lead or a team leader. At this level, you typically have a deep and broad understanding of your technical stack. You can lead system design across systems. You’re likely still writing code, and you’re usually leading engineering projects for your team. Potentially, you’re responsible for leading your team’s process as well, such as being a scrum leader. Even if you’re not leading the process, engineers at this level typically have a big influence on how engineers collaborate on their team in terms of process and culture and tools and patterns and coding standards and things like that. Then when your company reaches a certain size, there is yet another individual contributor level that’s often needed. I call this a platform leader. These are engineers who, formally or informally, have responsibility for leading broad architectural decisions for their platform, or who are leading large systems across teams. They might even be leading the broader technical direction for their entire company, if you’re part of a smaller company. They might oversee large architectural changes, or lead the review and adoption of new patterns for the code base across teams. These are often engineers who are involved in research and development or driving early strategy conversations about the business, about what’s possible, where can we go, where can we innovate? These can even be industry leaders who are speaking at conferences like this. There was a talk just before mine, as well, about the direction we should all be heading in this industry across companies.
What Do People Do All Day?
You see these terms and how I use them. Another way of looking at it and understanding what these mean is by thinking about what they would do in a typical day. An engineer, of course, does a lot more than just writing code. You do peer code review. You’re going to be testing things. You’re going to be deploying things. You might be triaging some bugs. You’ve got some team process to participate in, like estimating, and planning, and doing retros, and demos, and things like that. As a feature leader, you’re going to do all of that a trusted contributor does. Because you now have responsibility for at least some feature development, in a typical day, you might spend some time digging in with your product partner to understand those requirements and write up some tickets. You’re going to maybe break some tickets down and get prepared to do a refinement for those. You’re going to spend some time also, because you’re at that level in your career where you’re thinking about how we work. You’re going to spend some time thinking about the code base and how it could be improved. You’re going to be thinking about documentation. You’re probably also spending time mentoring the trusted contributors on your team. Tech leads and team leads often do all that and more. They’re responsible for driving engineering projects. They’re probably working even more closely with their product and business partners to define and drive projects for the team. They’re expected to be driving our best practices and developing our strategy as a team for the long-term health of that team, or that platform, or the feature, or whatever scope your team is responsible for. They’re probably participating in not just mentoring, but also interviewing as well for trusted contributors and feature leaders and future teammates.
For platform leaders, the day-to-day is often very different from any of the other levels. Again, this varies widely company to company. They may not even focus on or be involved in the output of a specific team or a specific project. Their whole role might be much more about moving the needle on larger engineering initiatives that might span multiple teams, that might span multiple systems. They might spend their day being consulted by different engineers and teams that need input and direction. The mentoring and coaching they do is probably a lot more focused on team leaders and tech leads. They’re responsible for deeply understanding the business’s long-term goals and defining the technical strategy that will get us there, which might include maybe spinning off new projects, or making broad technical changes, or also cultural changes or cross-team process evolution too. Part of the reason why engineering career ladders and titles are so confusing is that these roles overlap. They overlap and they look different at different companies, on different teams even within the same company, and for different people at different times.
Scope of Responsibility and Expertise
I assert, a perfect mapping does not exist. I don’t think that my model is perfect. There certainly are exceptions where a bullet point will fall to the left or the right of what I have listed here. I do find this model helpful, which is of course why I’m sharing it with you. What I’m trying to illustrate is that as you grow, you increase the breadth and depth of your expertise and responsibility. First, you might grow depth and breadth across technology, but then that expands to include people, mentoring people, and practices, and then process and your business. Breadth and depth in all of these dimensions. When you start out, you’re focused on your own learning. You start out as a trusted contributor, you’re on one team. You’re focused on your own learning and often in one layer of the tech stack because you’re just getting started. Then as a feature leader, your responsibility broadens. You might work on multiple projects. You’re usually still embedded on one team, but your understanding of the code base and your product broadens so that you’re familiar with how things work in different areas. Your expertise deepens within your platform, and goes deeper into new areas of your application stack and your code base so that you can solve those larger, more complex problems. Then as a team lead or a tech lead, you broaden your responsibilities again, often to include the whole team, the whole tech stack, multiple simultaneous projects. Broadening goes out. You most certainly will deepen your skills, and your technical understanding in multiple areas, and your skills around influencing culture, and process, and planning for the team in order to make them successful. Then, as a platform leader, your scope broadens again, to include more teams, more initiatives, more responsibility for charting the path into the future for your team or your company. 
You need to deepen your understanding of your business and the market in which you operate, and the industry that you’re working in, in order to guide the engineering strategy in the right direction.
As you grow from level to level, in your career, you tend to alternate like this between increasing the breadth and depth of your responsibilities and expertise. Sometimes to get to the next level, you really need to do both, grow in both breadth and depth, but it is hard to grow in two dimensions at the same time. I find it helpful when I’m coaching engineers to talk in these terms of breadth and depth and help them figure out where they could focus on stretching the most. Do they need to broaden their responsibilities, broaden their understanding, or do they need to focus next on building depth in specific areas? Sometimes I like to show this in terms of an engineering rubric. Here, as you move from level to level, so in this case, row to row, you can see how your skills deepen, the colors get darker, the numbers get higher as you go level to level. Also, your responsibilities and skills broaden as you fill in more columns. Sometimes it’s helpful to illustrate it in this way.
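To make the rubric idea concrete, here is a minimal, hypothetical sketch of how such a rubric might be represented in code. The level names follow the model above, but the skill areas and the numeric scores are invented for illustration; they are not taken from any real company ladder.

```python
# Hypothetical engineering rubric: each row is a level, each column a skill area.
# Higher numbers mean deeper expertise; more non-zero columns mean broader scope.
RUBRIC = {
    "trusted_contributor": {"technology": 2, "people": 0, "process": 0, "business": 0},
    "feature_leader":      {"technology": 3, "people": 1, "process": 1, "business": 0},
    "team_leader":         {"technology": 4, "people": 2, "process": 2, "business": 1},
    "platform_leader":     {"technology": 4, "people": 3, "process": 3, "business": 3},
}

def growth_gaps(current: str, target: str) -> dict[str, int]:
    """Return the skill areas (and point gaps) to stretch into for the next level."""
    cur, tgt = RUBRIC[current], RUBRIC[target]
    return {area: tgt[area] - cur[area] for area in tgt if tgt[area] > cur[area]}

# A coaching conversation might start from a gap report like this:
print(growth_gaps("feature_leader", "team_leader"))
# → {'technology': 1, 'people': 1, 'process': 1, 'business': 1}
```

One nice property of framing the rubric as data is that it makes the alternation visible: some level transitions mostly add new columns (breadth), while others mostly raise numbers in existing columns (depth).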
Inflection Points: How Your Role Changes
In my experience, in addition to the breadth and depth, I’ve also seen inflection points when making these particular career steps from trusted contributor to feature leader to team leader to platform leader. The first one’s a little bit obvious, maybe. It happens earlier in your career, when you go from being a follower to being a leader. You can no longer look to someone else to define the architecture, or choose the right algorithm, or figure out what’s going on in production that’s making all your data fall on the floor. The second inflection point happens when you start to take a leadership role on your team, even as an IC. This one I call player to coach. This is a moment when you realize you can’t be on the field all the time. You need to focus your energy on setting your team and your teammates up for success. I’m going to talk more about what that looks like for me and how that works. There’s one more inflection point that you hit when you’re a lead platform engineer. This takes you away from the day-to-day operation on teams, the day-to-day development that’s happening. I call this inflection point actor to director. This might at first seem a lot like player to coach, like a player is on the field and the actor is on the stage. There’s a subtle and important difference in that the director, and think of this as like a director in a play, does all of their work in planning and rehearsal. They’re not standing on the sidelines making calls the way a coach does during a game. I’m going to share some more examples of what that has looked like for me.
What’s It Like to Take Each Step?
What is it like? What was it like for me? What have I seen it be like for others? Over and over again, you start out as this awkward newbie. You work really hard, and you become awesome at what you do, only to have to give it up and start all over again at the next level in your career with a new skill set, new responsibilities. At each new level, it’s hard to let go of the things that you were good at before, but you have to, you must. What I hope to share with you in this talk is how, through it all, the key to success is to focus on growing others to take your place, to be able to do what you can do, so that you can take on new challenges, new opportunities, new responsibilities, and further deepen your expertise. Before I get into storytelling about my many past mistakes and lessons learned, I wanted to pause and say, I’m talking a lot about growth. What I mean by that isn’t just about getting a bigger salary or a bigger title. Those are great. You should get those as you grow. It’s really about learning. It’s about challenging yourself to be able to have a bigger impact and to learn more so that you can add more value to your business, to your industry, to your community.
Growing to Feature Leader
We’re going to talk about growing from a trusted contributor to a feature leader. I think most of you are beyond this point in your career. I think it’s helpful to start at the beginning, especially as you’re mentoring others who are taking this step in their career as well. If you’re far beyond this, I want to challenge you to think back: do you remember when you were reaching for that first senior engineer role? It was admittedly a long time ago for me. Here’s a reminder of what those two levels looked like for me. I definitely learned some lessons here. I’ve seen those repeated in engineers that I’ve coached over the years. When I started out as a trusted contributor, I had an amazing team around me. They were all super senior engineers, and they would help digest the product requirements. They would break that down into a really great actionable plan. They would point out what was going to be the easy parts and what was going to be the hard parts. They would carve out just the right tickets for me, the things that were appropriate for me at my level. Then for anything more complicated, I could pair up with them. I learned from them through this process. It felt amazing, and I hope you remember this too, to be so set up for success as a trusted contributor, whether through pairing with them and learning, or through them carving out tickets that were just right for me, and just appropriately challenging. It felt really good, and it was fun. At some point, you are needed to solve problems that are bigger than anyone can carve out for you, and there isn’t anyone who can break it down. At some point it is your turn to make the plan. This isn’t easy to do, especially the first time. I remember struggling with this. I’ve seen a lot of other people struggle with this as well.
There are two primary ways I see people struggle when they’re asked to put together an implementation plan and lead feature development the first time. They do one of two things, they freeze, or they hide. By freeze, I mean that people would freeze up or flounder when they’re asked to figure it out. Figure out this feature, how should we build this? What do we really need? They freeze. It’s like writer’s block. It’s usually just a sign that they need more practice building muscles. Building muscles for how to dig in on requirements. Building muscles for how to dig into the code and figure out how this feature is going to fit in, and coming up with a solution or an implementation plan, and sharing that. This is something that you can practice. For this I recommend pairing up, just pair up with a more senior engineer and get more reps, get more practice until it becomes easier. It is totally normal and ok to struggle. The other way I see people struggle when they’re just starting out as a feature leader is they hide. This is when they really do understand the requirements and they do have an idea for how to solve it, but they’re either uncomfortable putting their ideas out there for others to follow, or they just struggle to articulate their implementation ideas in a way that other engineers can follow, they can join in on. These are the ones that often will plow ahead if you let them. They will just keep implementing everything by themselves because it’s a lot easier. They understand that I got a plan, I’m just going to go. Maybe this is where like cowboy coders come from. They just get stuck at this hiding point. What I do know is this is a trap. Trying to always go it alone means feature development can only move as fast as you, which may not be fast enough for your business or your team, long-term.
Learning to articulate technical ideas and define work that others can join in on is a muscle. You can learn it the same way we’ve all learned it. We weren’t born knowing how to do this. Maybe you need to learn some effective ways of drawing technical diagrams, or documenting APIs, or just succinctly capturing technical ideas outside of code. It helps here, as with the folks who freeze, to pair up and get more practice and feedback by working closely with other, more senior engineers. Also, in this case, for folks who struggle in this way, I find it helpful to take up every opportunity you can to practice these skills even outside of feature development. Volunteer to write documentation. Volunteer to write runbooks, or onboarding guides. Volunteer for that. The important thing is you’ve got to learn to pass the ball. The ball in this case is the technical understanding needed to do the work. To grow as a feature leader, you have to increase the depth of your technical and product understanding, of course. You need to increase the breadth of your skills to include communicating technical ideas to your teammates, in a way that they can join in and follow along. It is really important to break through this growth point, because what happens is that as a feature leader, you become a force multiplier. You can create clarity. You can create direction for your teammates, the same way those senior engineers did for you when you were starting out as a trusted contributor. This is the stage where you make the transition from follower to leader.
Growing to Team Leader
Now you’re a senior engineer, you’re a feature leader, what comes next? That next level is again what I call tech lead or team lead. Here’s how I think that level is different. This is when you’ve reached the stage when your technical and product knowledge is so deep and so broad on your team that you are the one everyone looks to, to handle the gnarliest of problems. I remember one of the first times I found myself as a new tech lead, on a team where I was surrounded by much more junior engineers. When we’re kicking off a new project, we’re adding a big new feature to our SDK or our app, just like as a feature leader, as a tech lead you work really closely with your manager and your product and business partners. You figure out what needs to get built. You start planning how it needs to get built. You come up with a technical design. You know how it will need to fit within the existing systems. You know what’s going to be the easy part, and you know what’s going to be the hard part. My initial approach, and this, again, is very common, was to keep all of the challenging, tricky work to myself, and then farm out only those much simpler things to the more junior engineers around me. Have you done this? We’ve all done this. It just feels practical in the moment. There’s always a need to move as fast as possible. You’re the fastest one for the job. You can’t even always put down a plan and articulate exactly what needs to be done. When things are more complicated, for the more delicate parts of new feature development, you don’t have time to write down everything you know about the code base along with a very detailed implementation plan for someone to follow. You literally can’t do that. Sometimes these things need to be figured out as the feature is getting built. You’ve got to do some prototyping. You have to do some research, find the right algorithm, or framework, or pattern to solve the problem.
In many cases, this approach works fine. The feature gets built. The code is great. The code might even end up better because you personally were in there making improvements while you were building this tricky feature. You probably learned something by building this. You’re going to learn something more about your code base, or your architecture, or this product, but you haven’t grown. All of you tech leads and team leads, you can keep doing this forever. You can keep personally building every fancy, innovative, new feature that your product needs, but that is all you’re going to be doing. There’s a similar trap that I fell into many times as a lead engineer, when there are issues in production, an alarm is going off, data is not flowing, errors are being reported. My initial approach as the person who knows everything about everything is to jump on it and fix it. Again, this works. Problem solved, very quickly. We can all go back to bed. The problem is that it left me being the person that was needed every time the system broke, every time any system broke that I had built. It’s the same trap again. This is not the last time you’ll see it either. This is a pattern I’ve seen again and again with the tech leads that I’ve coached over the years: by keeping the gnarliest, least well-defined, and frankly, most interesting work for yourself, you’re depriving the people around you of an opportunity to grow. You may not see it at the time, but you are depriving yourself of an opportunity to grow, too. It’s not uncommon for this bottleneck situation to only be resolved when that senior engineer finally burns out and unexpectedly leaves the company and the rest of the team is left to fill in the gaps. You know, you’ve all been there. I’ve been on both sides of that. It is not a good time.
What do you do? What does it look like to grow others in this situation instead? How do you grow from that? Let’s go back to feature development. Let’s say after working with my manager and my product partner, I understand what needs to get built. What I learn to do is to write up just enough: just enough guidelines, just enough context, just enough warnings about what to watch out for. I would carve those out and I would be able to give those individual features to another team member. I would still define and refine user stories, but I wouldn’t assign all of them to myself. I wouldn’t assign all the tricky ones to myself. I had to learn to step back and let the engineers around me take a stab at figuring out those trickier parts. I’d still tell people what they needed to do sometimes, but I also made a point in those cases to explain why. Why was I suggesting that approach, so that they could think for themselves and make decisions if needed during implementation without having to come back and ask me every time if it’s ok. Of course, I would still pair program and code review. I would jump in if really needed, I was still there on the team, but I would give them space to try. They might fail but they’re definitely going to learn from that.
The same is true with production incidents, and on-call issues. I had to learn how to document enough information so that anyone could see, for the most common alerts and errors, what was happening, and know where to begin troubleshooting and resolving it. Then I’d set up an on-call rotation, and let people be on the hook for responding to these alarms. I’d still be there if needed, but I would let them try. You know what happened? Features still got built. Incidents still got resolved. Yes, sometimes it took longer, maybe, but sometimes the result was even better than I could have planned. For feature development, one time there was a more junior engineer who knew a lot more than I did about a new framework that I hadn’t considered. He came up with a great, clean, scalable solution for our projects, because I’d given him that piece and he figured it out. For incident triage, one time another engineer used their on-call time to automate a bunch of steps that I had previously been running more manually with scripts. My team was happy. In my experience, this is the kind of team that people love being a part of. This is a place where they’re supported, and challenged, and given room to grow as leaders and take ownership of new problems and new systems.
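As an illustration of the kind of automation that engineer built, here is a minimal, hypothetical sketch of turning documented manual triage steps into a script. Every alert name and command here is invented for the example; a real runbook would run actual diagnostic commands instead of `echo` stand-ins.

```python
# Hypothetical runbook automation: each common alert maps to the ordered
# diagnostic steps that used to be run by hand (all names are made up).
import subprocess

RUNBOOK = {
    "queue_backlog": [
        ["echo", "checking consumer lag..."],    # stand-in for a real lag check
        ["echo", "restarting stuck consumers..."],
    ],
    "disk_full": [
        ["echo", "pruning old log files..."],
    ],
}

def triage(alert: str) -> list[str]:
    """Run the documented steps for an alert and collect their output."""
    outputs = []
    for cmd in RUNBOOK.get(alert, []):
        result = subprocess.run(cmd, capture_output=True, text=True)
        outputs.append(result.stdout.strip())
    return outputs

print(triage("disk_full"))
```

The point of a sketch like this is less the code than the handoff it enables: once the steps live in a script and a document rather than in one person’s head, anyone on the rotation can respond.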
What happened is that more people knew how to do what I knew how to do, which meant that, as a team, we can do more in parallel, and more perspectives resulted in us coming up with better solutions. Even better than that, not always being on-call and heads down building everything, freed up more of my time to look at our overall architecture. I could look at our technical operations. I could look into our tech debt. I could plan and implement improvements that would make our apps more stable, and our team more productive. I had more time to make documentation and to do knowledge sharing so that my potential leaving one day wouldn’t be such a pain for everyone, and we can onboard new developers more easily. Not only did my team learn to operate like a well-oiled machine, but because I was doing more planning and documenting and handoffs, I deepened my skills for communicating technical ideas at a higher level. I had time to take those ideas and plans, and influence other teams across the company. Even outside the company at conferences like this, which broadened my influence. I had time and headspace to join meetings with senior leaders about new strategic initiatives, or get involved earlier and more often on the planning that was done even further out into the future. This was a big part of my growth as a tech lead into more of a platform leader, which is what I want to talk about next.
What I learned from all of this is to grow other engineers to become leaders as well, to be able to plan and implement features independently. To triage production issues. To be able to think and act without me. With your bottleneck removed, your team can do more in parallel. More software and better software is going to be built because you are able to make this step. Growing other engineers, and most importantly, letting go of being the only go-to expert for everything, gave me time to learn about and influence more projects and more systems and work on larger technical problems, which broadened my responsibility and deepened my expertise. In this role, you have the experience and the context to see the big picture for your team, and to shape the future of your code. That is what you should be doing. Figuring out implementation details and writing the code and reacting to errors in production is something all engineers can eventually do; let at least some of that go. Remember, above all, it’s not just about delegating work that’s beneath you. That’s not what this is about. You have to give your teammates opportunities to plan and implement, maybe fail, but definitely learn, while supporting them and providing your overall technical direction.
Growing to Platform Leader
We’re going to talk about growing beyond a team lead into a platform leader. Again, not all companies are large enough to have a role like this, like a platform leader. Some companies have architects that are in a role like this. This may look different for you, but this role is when you are at or near the highest technical level in your company, and you’re needed to drive innovation and technical change across multiple teams, across potentially your entire organization. Sometimes, again, you do this by being embedded on a team. Sometimes you are floating in between and being consulted by many projects and many teams. When I first had a role where I was a leader across teams, I still worked a lot like a team lead, because that’s what I knew. It’s what I was really good at. I paid a lot of attention to process. I paid a lot of attention to technical plans and requirements and code review. I still wrote a lot of code. I wanted to make sure everyone knew what needed to get built, what the plan was for deploying our features to production across teams. I owned a lot of that process across teams. I ran a lot of the meetings. I was creating documentation. I was reporting on our progress. I was doing deployments and testathons, and prod maintenance. I was celebrating when we finished things. None of these are bad things to do. The problem is that by taking all of the responsibilities for coordinating development across the platform onto myself, it meant that every time there was a question about a cross-team requirement, or a big implementation decision, or something going wrong in production that didn’t obviously go to one of the teams, everyone looked at me. I was keeping too many challenges and responsibilities to myself, and depriving the people on the various teams of an opportunity to grow.
I think you all know what that is. That’s a trap. That’s a bottleneck. Instead of being a bottleneck just for feature implementation, and maybe incidents on one team like before, now it was a bottleneck for broad technical decisions and cross-team process. Even as I grew into my role, I was overwhelmed by the number of meetings I had to attend, the consulting I was being asked to do, and the amount of process I was running. It made it really hard to take a day off, let alone think about anything except the day-to-day operation of the platform. It meant that when I had to write a new project proposal or research a new idea, or just do bigger thinking, I struggled to find the time or the energy. I was burning myself out. What does it look like to grow beyond this? The trick was to grow the whole org to be able to make technical decisions, to run our cross-team release process without me if needed. I had to grow tech leaders and team leaders. I had to trust them to run things and even coordinate amongst themselves.
One of the best pieces of advice I got when I was at this point in my career was to focus on doing what only I can do. Focus on doing what only you can do. For every meeting, for every task that needs to get done, for every challenge that needs to get addressed, ask yourself, is this something that only you can do? If not, you should be helping somebody learn how to do that. I had to coach more of our engineers to think like tech leads, to talk about not just what but why. I needed to invest more in documenting and getting others to document our architecture, and our policies, and our best practices. I needed to do more training and knowledge sharing, generally. I had to automate processes where possible to make the right thing the easy thing to do. I automated builds and releases, and made checklists for technical interviews, onboarding, and releases. I still had to make sure that I set boundaries and provided enough context to the teams for the initiatives that I was pushing across the organization. I also had to trust the teams and reinforce this culture where it’s safe, and ok to act and maybe fail, because we will learn from it every time and improve. I’m not usually so big on sports metaphors, but this is really helpful. This is where you have to learn to identify as the coach, not the star player. You have to take yourself off the field, and coach your teammates and the engineers in your organization to win. What happened was that I was able to avoid burning out. I was able to start looking further into the future and think about how our platform needed to evolve, and what opportunities we had ahead to bring innovation into our product.
There are a bunch of pitfalls at this level of platform leader. I’m going to dive into another one. I think there are three more left. The next pitfall comes when the leadership you’re doing across the platform involves longer-term, forward thinking, things like big organizational changes. You might be planning a big reorg. You might be part of that planning. You might be part of planning a big technical platform change, which is going to impact the kind of code that people are writing on a day-to-day basis. A difficult step for me in the transition to this role, when this planning came up, was to be able to pause thinking about people as individuals, and to start thinking about what the right resources are, what the right organizations are to solve this problem, to meet our goal. In my early days in this role, I was in a thankfully small meeting with senior leaders about a big future staffing change. It was actually a staffing change and a technical platform change. I could not stop thinking about and mentioning out loud how each individual would like or not like the changes that we were proposing. It was after that meeting that one of my mentors pulled me aside, and he taught me about this concept of people-ing versus process-izing. It’s not that you stop caring about people. I never stop caring about people. I can’t. You have to be able to take a step back and look at the bigger picture, and to strategize, and think, and problem-solve on an organizational level. You need to be able to think at the level of your whole platform, of your whole organization. If you can’t do that, you’re not going to be able to effectively lead at this level. This one was hard to see until someone pointed out to me that I wasn’t doing it. That’s an important lesson to learn.
You not only need to think at the level of your organization, but you also need to get comfortable acting at that level too. Another big surprise for me was how hard it would be not to just jump in with a team or a tech lead when they’re challenged with a specific technical issue or a process issue. I would just want to jump in with the code with them and try to fix it. That’s a trap, too. At a certain level, you can’t personally fix every team and project problem. You should be looking for patterns of struggle across the platform and proactively identifying ways to prevent them. You are there to inspire, to align, to support, to coach the engineers and team leaders and tech leaders; they’re the ones actually doing and leading the work. Make it easy for them to do the right thing in their role. It’s like they’re the actors in the play, and you become more like the director. When you’re a team lead working within a large organization, you’re handed a roadmap, and to a certain extent, the rules of the road. It’s your job to rally your team to get them to do what’s needed, to execute on their plan. That’s like an actor who’s given a script and staging instructions. It’s up to the actors to really excellently perform the parts. There’s a lot of creativity in that. That’s what the role is like. As a platform leader, you’re like the director. It’s a much more challenging role, because in some ways, not only are you off the stage, but you also have to come up with the plan in the first place. What are those rules of the road? What are those staging instructions?
There are infinite directions you could go into as an engineering organization. There are times when everyone’s going to look to you to decide. Also, leading at this level is a lot more about influence than even I originally expected. You’re going to work with engineers and team leads and managers that don’t report to you. You will also need to partner with leaders across many different departments: design, product, marketing, IT, operations. It’s not possible to achieve your goals at this level without being able to build alliances and influence others. You’ve got to get other platform leaders, other executives, other VPs on board with your ideas. It’s like, if you’re the director of the play, but the lights and music and costumes people, they’re not accountable to you, and they have different agendas, which makes it really hard. Again, the advice I gave myself as a platform leader, and that I would give to you, is to focus on growing great tech leads, and team leads, and let them focus on the day-to-day execution, so that I could bring all of my energy, and creativity, and experience to planning out and influencing the technical, organizational, cultural, and operational changes that we needed to really nail our goals.
Summary
You’ve heard a lot of my stories. You have a model for thinking about your career, for thinking about the careers of those that you’re mentoring, as they go from trusted contributor to feature leader to team leader to platform leader. Now you know there are things you’re going to have to let go of, and that is really hard. You’ll go from follower to leader, from player to coach, and then from actor to director. At each transition, letting go of what you’re good at, and growing others to be good at that instead, is what gives you the time to grow into your new role. As you grow in your career, you can think about expanding the breadth and depth of your responsibility and expertise. If you aren’t sure which direction to grow in, you can think about, how can I add depth to my expertise? Or, how can I broaden my responsibilities? Growth can be so painful, but it can also be tremendously rewarding. As an engineer, even as a trusted contributor, I love fixing a gnarly bug or writing beautifully expressive code, or shipping an important feature all by myself. I was even more deeply satisfied when I finally realized I could be a great team leader. I could create exactly the kind of team culture and healthy engineering practices that led to a sustainable and robust and flexible architecture just like I always dreamed of.
As a top engineering leader, as a platform leader, I was deeply satisfied to create a culture and an architecture and establish cross-team processes and a roadmap in which dozens of engineers and leaders were able to grow and do satisfying work, and add tremendous value to our business. If like me, you like to build technology that delivers tremendous value and you love to build culture that makes people’s jobs and lives more satisfying, you can do that more as you grow others and grow yourself. That’s what you get. As you grow, you influence more of the technology and people and shape the world around you. Focus on doing what only you can do in your position. Everything else, coach, delegate, support your teammates to take care of too. Don’t deprive them of an opportunity to grow.
Java News Roundup: Jakarta EE 11-M1, Payara Platform, Quarkus Release Plan, Spring Releases
MMS • Michael Redlich
Article originally posted on InfoQ. Visit InfoQ
This week’s Java roundup for December 18th, 2023 features news highlighting: Jakarta EE 11-M1 and GA release plan; Payara Platform December 2023 release; point releases for Spring Boot, Spring Cloud and Spring Security; Quarkus release plan; and CVE-2023-46131, a Grails data binding vulnerability.
JDK 23
Build 3 of the JDK 23 early-access builds was made available this past week featuring updates from Build 2 that include fixes for various issues. Further details on this release may be found in the release notes.
JDK 22
Build 29 of the JDK 22 early-access builds was also made available this past week featuring updates from Build 28 that include fixes to various issues. More details on this build may be found in the release notes.
For JDK 23 and JDK 22, developers are encouraged to report bugs via the Java Bug Database.
JavaFX 22
Build 23 of the JavaFX early-access builds was made available featuring updates from Build 22 that include fixes to various issues.
Jakarta EE
In his weekly Hashtag Jakarta EE blog, Ivar Grimstad, Jakarta EE developer advocate at the Eclipse Foundation, has announced that the first milestone release of Jakarta EE 11 has been made available to the Java community. The goal of this release is to verify that the build chain was well established and provide the API artifacts to all implementers of Jakarta EE. Details for each profile may be found in Jakarta EE Platform 11-M1, Jakarta EE Web Profile 11-M1 and Jakarta EE Core 11-M1.
Grimstad also provided an update on the status of plan reviews for the specifications that will provide updates for Jakarta EE 11, scheduled for a GA release in 1H2024:
- December 2023: Milestone 1 providing milestone releases for all specifications that have planned updates for Jakarta EE 11.
- February 2024: Milestone 2 providing final versions of specifications in waves 1 to 4 and updated milestone versions for the remaining specifications.
- March 2024: Milestone 3 providing final versions of specifications in wave 5 and updated milestones for the remaining specifications.
- April 2024: Milestone 4 providing final versions of specifications in waves 6 to 7.
Further details on Jakarta EE 11, including the specifications classified in each wave, may be found in the release plan.
Eclipse JNoSQL
Version 1.0.4 of Eclipse JNoSQL, the compatible implementation of the Jakarta NoSQL specification, has been released featuring: fixes for constructor and generics type handling to ensure a more seamless experience when working with Eclipse JNoSQL; enhanced handling of null values in embeddable documents; and a change in the package name to avoid duplicate names in different modules. More details on this release may be found in the release notes.
Spring Framework
Versions 3.2.1 and 3.1.7 of Spring Boot deliver improvements in documentation, dependency upgrades and notable bug fixes such as: an instance of the HibernateJpaAutoConfiguration class should be applied before the DataSourceTransactionManagerAutoConfiguration class because the former imports required beans; an IllegalStateException from closing a ZIP file due to the StaticResourceJars class closing JAR files from cached connections; and child contexts created with the SpringApplicationBuilder class executing the parent’s runners. Further details on these releases may be found in the release notes for version 3.2.1 and version 3.1.7.
Versions 6.2.1, 6.1.6 and 5.8.9 of Spring Security have been released featuring bug fixes, dependency upgrades and new features such as: documentation that the Shibboleth repository is required for support of the Security Assertion Markup Language (SAML); integrated caching of the HandlerMappingIntrospector class; and a resolution to the OAuth2 Resource Server exposing server information. More details on these releases may be found in the release notes for version 6.2.1, version 6.1.6 and version 5.8.9.
Spring Cloud 2021.0.9, codenamed Jubilee, has been released providing bug fixes and upgrades to sub-projects such as: Spring Cloud Commons 3.1.8; Spring Cloud Starter Build 2021.0.9; Spring Cloud Kubernetes 2.1.9; and Spring Cloud Netflix 3.1.8. This release is based on Spring Boot 2.6.15 and is compatible with Spring Boot 2.7.18 and 3.0.13.
Versions 1.1.1 and 1.0.4 of Spring Modulith have been released to deliver bug fixes, dependency upgrades and improvements: avoiding potential duplicate inclusions of the ModuleTestExecution class; and excluding Spring AOT classes from architecture verification, as they might otherwise introduce dependencies to application components considered module internals. Further details on these releases may be found in the release notes for version 1.1.1 and version 1.0.4.
Versions 1.2.1, 1.1.4 and 0.4.5 of Spring Authorization Server have been released featuring bug fixes, dependency upgrades and a new feature in which the org.webjars dependencies were removed from the demo-authorizationserver sample application. More details on these releases may be found in the release notes for version 1.2.1, version 1.1.4 and version 0.4.5.
The release of Spring for Apache Kafka 3.1.1 ships with bug fixes, improvements in documentation, dependency upgrades and new features such as: minor improvements to the listeners associated with the MessagingMessageListenerAdapter class; a resolution to defects in perceived counterintuitive default methods in the ConsumerFactory interface; and improvements to the DefaultKafkaHeaderMapper class to avoid any potential NullPointerException exceptions. Further details on this release may be found in the release notes.
The release of Spring for Apache Pulsar 1.0.1 provides bug fixes, improvements in documentation, dependency upgrades and improvements: a more convenient way to use the @ReactivePulsarListener annotation in streaming mode with Spring messages; support for tombstone records with the @PulsarListener annotation; and a deprecation of the PulsarListenerEndpointAdapter and ReactivePulsarListenerEndpointAdapter classes in favor of default methods defined in the ListenerEndpoint interface and its subinterfaces for improved custom implementations of ListenerEndpoint. More details on this release may be found in the release notes.
The release of Spring AMQP 3.1.1 delivers bug fixes, improvements in documentation, dependency upgrades and new features such as: elimination of the synchronized keyword in the BlockingQueueConsumer, RabbitTemplate and RabbitAdmin classes; and a resolution to a new ObjectMapper instance of the Jackson2JsonMessageConverter class not being aware of the module supporting JSR 310, the Date and Time API. Further details on this release may be found in the release notes.
Payara
Payara has released their December 2023 edition of the Payara Platform that includes Community Edition 6.2023.12 and Enterprise Edition 6.9.0. Both editions feature bug fixes, component upgrades and improvements: enhancements in the Payara Bill of Materials (BOM) for version consistency with the Payara API dependency that simplifies dependency management for developers; and publication of Docker images compatible with JDK 21 that ensures developers have access to the latest and most secure Java features. More details on these versions may be found in the release notes for Community Edition 6.2023.12 and Enterprise Edition 6.9.0.
Open Liberty
IBM has released version 24.0.0.1-beta of Open Liberty featuring support for the Jakarta Data 1.0-M2 specification, which provides API updates to pagination and various improvements to the Javadoc and specification text. This release includes a test implementation of Jakarta Data that they use to experiment with proposed specification features so that developers can try out these features and provide feedback for the Jakarta Data 1.0 specification beyond milestone 2.
Quarkus
The release of Quarkus 3.6.4 provides resolutions to: a NullPointerException observed in edge cases during a live reload, by adding null checks to the isRestartNeeded() method defined in the TimestampSet inner static class within the RuntimeUpdatesProcessor class; an incorrect error reported when the OpenAPI key is not present, by adding a Vert.x NoStackTraceException class in the metrics output; and a NoClassDefFoundError from the Java SequencedCollection interface with an application targeting Java 17, built with JDK 21 and running on Java 17. Further details on this release may be found in the changelog.
With Quarkus 3.2 defined as the current LTS release, Red Hat has published their release plans for upcoming minor releases of Quarkus 3.7, 3.8 and 3.9, currently scheduled for release at the end of January, February and March 2024, respectively. JDK 17 will be the minimal JDK version starting with Quarkus 3.7 and Quarkus 3.8 will be defined as the next LTS release. More details on the upcoming release of Quarkus 3.7 may be found in this InfoQ news story.
Helidon
The release of Helidon 4.0.2 ships with notable changes such as: an update to the web server’s internal state if a listener fails to start, ensuring that calls to the isRunning() method defined in the WebServer interface return false when the server isn’t listening for connections; a resolution to premature access to the RegistryFactory class due to the JPA CDI extension running some start-up complete code before the metrics CDI extension had a chance to prepare Helidon MP metrics; and ensuring that a supplier of the WsListener interface is called exactly once per connection, to resolve reuse of the supplier in the request/response lifecycle. Further details on this release may be found in the release notes.
Similarly, Helidon 3.2.5 provides: dependency upgrades; fixes to some of the examples; and slight relaxation of a unit test to avoid test ordering issues. More details on this release may be found in the release notes.
Hibernate
The release of Hibernate Search 6.2.3.Final delivers notable changes such as: an upgrade of the -orm6 artifacts to Hibernate ORM 6.2.17.Final; compatibility with OpenSearch 2.11.0; and an adjustment to Hibernate Search’s Jandex index reading and building to work correctly with Spring Boot 3.2’s nested JARs. Further details on this release may be found in the release notes.
Grails
The Grails Foundation has provided full disclosure for CVE-2023-46131, a vulnerability in which a specially crafted Grails data binding web request can lead to a JVM crash or a denial of service. This CVE has been resolved in Grails versions 3.3.17, 4.1.3, 5.3.4 and 6.1.0.
The foundation has also released version 5.3.5 of the Grails Framework featuring: dependency upgrades; improvements to the release workflow; and a change of the resolve strategy from DELEGATE_FIRST to OWNER_FIRST due to the setProperty() method defined in the BeanBuilder class intercepting assignments, then discarding them if the currentBeanConfig variable is null. More details on this release may be found in the release notes.
Apache Software Foundation
The fourth alpha release of Apache Groovy 5.0.0 delivers bug fixes, dependency upgrades and new features/improvements such as: the addition of a getCodePoints() method in the StringGroovyMethods class to allow the traditional Groovy convention of using a codePoints property; a reconsideration of implementing an implication operator, ==>, for scenarios where the operator aids readability or otherwise makes sense; and generation of bytecode for Groovy interfaces with default, private and static methods to replace default methods that are currently based on traits. Further details on this release may be found in the release notes.
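Groovy's new getCodePoints() method exposes the same distinction that Java's own String.codePoints() does: code points, not chars. The short sketch below (plain Java, for illustration) shows why the two differ once a string contains a supplementary character such as an emoji:

```java
// Supplementary characters (outside the Basic Multilingual Plane)
// occupy two chars (a surrogate pair) but count as one code point.
public class CodePointsDemo {
    public static void main(String[] args) {
        String s = "a\uD83D\uDE00b";                  // 'a', U+1F600 emoji, 'b'
        System.out.println(s.length());               // 4 (chars)
        System.out.println(s.codePoints().count());   // 3 (code points)
    }
}
```

Iterating over code points rather than chars avoids splitting surrogate pairs, which is the usual reason to prefer a codePoints view.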
Apache Groovy 4.0.17 has been released with dependency upgrades and resolutions to: a regression in version 4.0.16 related to static type checking with Groovy generics; the JsonSlurper class parsing badly formatted JSON files without throwing an exception; and patterns conditionally created using the pattern operator, ~, being cast to type String or GString instead of Pattern. More details on this release may be found in the release notes.
Similarly, Apache Groovy 3.0.20 has also been released providing bug fixes, dependency upgrades and improvements such as: an enhancement to the coercion and implicit cast of map literals for the @CompileStatic annotation; and a resolution to the static type checker not being able to infer List or Map types for a method return. Further details on this release may be found in the release notes.
The release of Apache Camel 4.3.0 ships with bug fixes, dependency upgrades and new features such as: a new Kamelet to support the Advanced Message Queuing Protocol; basic support for virtual threads (but doesn’t cover the replacement of synchronized blocks with reentrant locks nor the review of all thread locals); and support for start and end dates in the Camel Quartz component. More details on this release may be found in the release notes.
Infinispan
The release of Infinispan 13.0.21.Final provides resolutions to: CVE-2023-4487, a process control vulnerability in which an attacker can insert malicious configuration files in the expected web server execution path to escalate privileges and gain full control of the Human Machine Interface software; CVE-2023-44487, a vulnerability in which Tomcat’s implementation of HTTP/2 was vulnerable to the rapid reset attack, causing a denial of service that typically manifested as an OutOfMemoryError; and an availability check failure with an uncaught exception from the PersistenceManager interface. Further details on this release may be found in the release notes.
Resilience4j
Version 2.2.0 of Resilience4j, a fault tolerance library for Java, has been released with bug fixes and these enhancements: support for Micronaut 4.0; and a framework agnostic bootstrapping of Resilience4j from Apache Commons configuration of properties for non-Spring Java applications. More details on Resilience4j may be found in this InfoQ news story.
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
Rheos Capital Works Inc. decreased its stake in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 35.7% during the third quarter, according to the company in its most recent disclosure with the SEC. The institutional investor owned 29,300 shares of the company’s stock after selling 16,300 shares during the quarter. Rheos Capital Works Inc.’s holdings in MongoDB were worth $10,134,000 at the end of the most recent reporting period.
Other institutional investors have also recently added to or reduced their stakes in the company. GPS Wealth Strategies Group LLC acquired a new stake in shares of MongoDB in the second quarter valued at $26,000. KB Financial Partners LLC acquired a new stake in shares of MongoDB in the second quarter valued at $27,000. Capital Advisors Ltd. LLC grew its stake in shares of MongoDB by 131.0% in the second quarter. Capital Advisors Ltd. LLC now owns 67 shares of the company’s stock valued at $28,000 after buying an additional 38 shares in the last quarter. Bessemer Group Inc. acquired a new position in MongoDB during the 4th quarter worth $29,000. Finally, Parkside Financial Bank & Trust grew its stake in MongoDB by 176.5% during the 2nd quarter. Parkside Financial Bank & Trust now owns 94 shares of the company’s stock worth $39,000 after purchasing an additional 60 shares in the last quarter. Hedge funds and other institutional investors own 88.89% of the company’s stock.
MongoDB Stock Down 0.6%
Shares of MongoDB stock traded down $2.34 on Monday, hitting $407.48. 831,700 shares of the company traded hands, compared to its average volume of 1,627,651. The company has a quick ratio of 4.74, a current ratio of 4.74 and a debt-to-equity ratio of 1.18. MongoDB, Inc. has a 12-month low of $164.59 and a 12-month high of $442.84. The company has a 50-day moving average price of $382.80 and a 200 day moving average price of $379.65. The company has a market cap of $29.41 billion, a PE ratio of -154.35 and a beta of 1.19.
MongoDB (NASDAQ:MDB – Get Free Report) last released its quarterly earnings results on Tuesday, December 5th. The company reported $0.96 earnings per share (EPS) for the quarter, beating analysts’ consensus estimates of $0.51 by $0.45. MongoDB had a negative net margin of 11.70% and a negative return on equity of 20.64%. The company had revenue of $432.94 million during the quarter, compared to analysts’ expectations of $406.33 million. During the same period in the prior year, the company earned ($1.23) earnings per share. The business’s quarterly revenue was up 29.8% compared to the same quarter last year. Sell-side analysts predict that MongoDB, Inc. will post -1.64 earnings per share for the current year.
Analyst Ratings Changes
MDB has been the subject of several analyst reports. Scotiabank began coverage on MongoDB in a research note on Tuesday, October 10th. They set a “sector perform” rating and a $335.00 target price for the company. Canaccord Genuity Group increased their target price on MongoDB from $410.00 to $450.00 and gave the stock a “buy” rating in a research note on Tuesday, September 5th. Citigroup raised their price objective on MongoDB from $430.00 to $455.00 and gave the company a “buy” rating in a research note on Monday, August 28th. Sanford C. Bernstein raised their price objective on MongoDB from $424.00 to $471.00 in a report on Sunday, September 3rd. Finally, TheStreet upgraded MongoDB from a “d+” rating to a “c-” rating in a report on Friday, December 1st. One investment analyst has rated the stock with a sell rating, two have given a hold rating and twenty-two have issued a buy rating to the company. According to data from MarketBeat.com, the company has an average rating of “Moderate Buy” and a consensus price target of $432.44.
Insider Buying and Selling at MongoDB
In other MongoDB news, CAO Thomas Bull sold 518 shares of the company’s stock in a transaction dated Monday, October 2nd. The shares were sold at an average price of $342.41, for a total transaction of $177,368.38. Following the completion of the sale, the chief accounting officer now directly owns 16,672 shares in the company, valued at $5,708,659.52. The transaction was disclosed in a filing with the Securities & Exchange Commission, which is available through the SEC website. Also, CFO Michael Lawrence Gordon sold 7,577 shares of the stock in a transaction that occurred on Monday, November 27th. The shares were sold at an average price of $410.03, for a total transaction of $3,106,797.31. Following the sale, the chief financial officer now directly owns 89,027 shares in the company, valued at $36,503,740.81. Insiders sold 298,337 shares of company stock worth $106,126,741 over the last ninety days. Company insiders own 4.80% of the company’s stock.
About MongoDB
MongoDB, Inc provides general purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premise, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
This instant news alert was generated by narrative science technology and financial data from MarketBeat in order to provide readers with the fastest and most accurate reporting. This story was reviewed by MarketBeat’s editorial team prior to publication. Please send any questions or comments about this story to contact@marketbeat.com.
Article originally posted on mongodb google news. Visit mongodb google news
MMS • Thomas Betts Daniel Bryant Vasco Veloso Eran Stiller Tanmay
Article originally posted on InfoQ. Visit InfoQ
Transcript
Introduction [00:17]
Thomas Betts: Hello and welcome to The InfoQ Podcast. I’m Thomas Betts, and today, members of the InfoQ editorial staff and I will be discussing the current trends in software architecture and design. I’ll let each of them introduce themselves and we’ll start with Eran Stiller.
Eran Stiller: Hi, so everyone, my name is Eran Stiller. I’m the Principal Software Architect for Luxury Escapes. I’m also an editor on architecture and design in InfoQ, I’ve been doing it for several years now and I’m enjoying every minute. I can’t wait for this discussion, which is always very interesting.
Thomas Betts: Great. We’ll move over to Vasco.
Vasco Veloso: Hello, thank you for listening. My name is Vasco Veloso. I am currently working for ING Netherlands on a role connected to software design and architecture with a few interests along the way, but an all-encompassing and generic role at the moment.
Thomas Betts: I think we can all say that. Moving on to Tanmay.
Tanmay Desphande: Hello, everyone. This is my very first time on this podcast. I’ve been working as Operations Data Platform Architect for SLB. My main work goes towards architecting systems for production optimizations and well constructions in the oil and gas domain, using the latest technologies, while also making sure that we are reducing carbon or using carbon-neutral technologies to do so. Pretty excited to be here.
Thomas Betts: We’ll wrap up with Daniel Bryant.
Daniel Bryant: Hey, everyone. Good to see everyone again. Daniel Bryant here. I run DevRel at Ambassador Labs, but a long career in software architecture and I do moonlight as a Solution Architect at Ambassador Labs, as well. You may recognize the voice. I do a few podcasts alongside Thomas and also do the news management here at InfoQ, too, so looking forward to this discussion.
Thomas Betts: So as I said today we’ll be discussing the current trends in architecture and design as part of the process to create our annual Trends Report for InfoQ. Those reports provide the InfoQ readers with a high level overview of the topics to pay attention to and also help us, the editorial team, focus on innovative technologies for our reporting. In addition to the report and the trends graph that are available on infoq.com, this podcast is a chance to hear our conversation and some of the stories that our expert practitioners have observed.
Design for Portability [02:21]
Thomas Betts: I think we want to start the conversation today with design for portability and the concept of cloud-bound applications, which has been gaining adoption. One of the popular frameworks for this is Dapr, that’s D-A-P-R, not to be confused with dapper, D-A-P-P-E-R, the Distributed Application Runtime, but that’s just one implementation of this idea of designing for portability.
Vasco, can you kick us off? What do you think of the idea of designing for portability and how’s it being used today?
Vasco Veloso: Well, from what I have seen, most of the usages of these tools and these patterns, they are going back, if I may say so, to the old paradigm of build once, run everywhere that we have been hearing about for a long time. Except that this time, instead of being on different machines, the everywhere is, well, every cloud, still chasing the vendor independence dream and it is quite good to see that there are choices of frameworks to accomplish that goal.
Personally, I believe that when the framework is good and has quite a reasonable amount of choices of platforms, it also means that whoever is designing and building the system can focus on what brings value, instead of having to worry too much with the platform details that they are going to be running on.
Server-Side WebAssembly [03:44]
Thomas Betts: Then one of the other ideas that falls into this bucket sometimes talked about is WebAssembly and there’s also the clarification of client-side WebAssembly versus server-side WebAssembly.
I know, Eran, this is saying you called out earlier in our pre-discussion. Can you elaborate on that? What’s the server-side WebAssembly integration with or thinking about for design for portability?
Eran Stiller: A lot of time when we think about WebAssembly, as we’ve said, it’s about running things in the browser, it’s in the name “Web” Assembly.
But I think a large benefit of it is running it on the server side, because there’s an ongoing effort for integrating a WebAssembly-based runtime instead of, let’s say, Docker. Let’s say you want to run your software in Kubernetes or another orchestrator. Then instead of compiling it to a Docker container and needing to spin up an entire system inside that container on your orchestrator, you compile to WebAssembly, and that allows the container to be much more lightweight. It has security baked in because it was meant to run in the browser, and it can run anywhere, in any cloud, on any CPU, for that matter. So it doesn’t matter if it’s an ARM CPU or an x86 CPU; it basically just abstracts away all the details, so we don’t really need to care about them. It makes everything work in a more lightweight and more performant manner. I think that’s the key benefit there that we’ll see now as we progress.
Thomas Betts: Daniel, I can see you nodding your head along. You have anything to add to that?
Daniel Bryant: Yes, it totally makes sense. As you mentioned Eran, I can see the Docker folks leaning heavily into this as well, right? They’ve totally recognized exactly this.
I think there’s a lot of interesting connections around security. Because as Vasco mentioned, I’m old enough to remember Java. “Write once, debug everywhere,” was the phrase I think we often used to say, right?
So as the abstractions are going up through the chain, from the JVM, as you implied there, Vasco, we're really at the cloud abstraction layer now. I think with things like Dapr combining with Wasm, it's all about finding the right abstractions where we can get that reusability, that high cohesion and low coupling, at whatever level you are operating at.
So many of us architects are thinking more about the Dapr-level stuff, but I think the Wasm stuff really has an impact. You hinted at it, Eran, on security, for example: if I choose to run a super lightweight container, and I'm sure many of us are using Go and scratch containers, same deal, it potentially gets rid of a whole attack vector. I know we're going to look at software supply chain later on, but I really love the idea of the different levels of abstraction here.
Thomas Betts: I can't remember who said it one time: "Every problem can be solved with one more layer of abstraction and indirection."
Daniel Bryant: Totally.
Thomas Betts: But the idea behind that joke is, “I want to reduce the cognitive load. I don’t want to have to think about the whole tech stack. How do I get closer to thinking about my business problem, my business logic, my domain?” If I have to think about, “How do I deploy this in Kubernetes?” all the time, that’s one more thing that my developers have to think about.
I think Dapr is one framework that says, “Hey, we’re going to take care of all of the deployment problems. We know the shapes of the things we want to deploy, the things we want to build with. We’ve now given you the right size Lego bricks to build what you need.”
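Dapr's "right size Lego bricks" work roughly like this: the application talks to a local sidecar over a generic HTTP API, and the sidecar translates that into calls to whatever backing component is configured. A minimal sketch of building such a state-store call (the store name and payload are illustrative; the actual POST to the sidecar is omitted so the sketch stays self-contained):

```python
import json

DAPR_PORT = 3500  # default Dapr sidecar HTTP port

def dapr_save_state_request(store: str, key: str, value: dict) -> tuple[str, str]:
    """Build the URL and body for Dapr's state-management API.

    The sidecar maps this generic call onto whatever state store
    (Redis, Cosmos DB, DynamoDB, ...) is configured under `store`,
    which is how the application stays portable across clouds.
    """
    url = f"http://localhost:{DAPR_PORT}/v1.0/state/{store}"
    body = json.dumps([{"key": key, "value": value}])
    return url, body

url, body = dapr_save_state_request("statestore", "order-42", {"status": "paid"})
# In a real app you would POST this to the local sidecar, e.g.
# requests.post(url, data=body) -- omitted here deliberately.
```

Swapping Redis for a cloud-managed store is then a component-configuration change, not a code change.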
Server-Driven UI [06:48]
Thomas Betts: Tanmay, you had an idea that I wanted to bring up that’s, again, somewhat related, server-driven UI. Can you tell us more about what you think that is?
Tanmay Desphande: It's a concept that people started talking about a couple of years back, but then didn't really understand where to use it correctly. Any public application that we see, any mobile-native application that we develop, has to go through quite a lot of, let's say, scrutiny when it gets published on the Apple and Google app stores, and you need to continuously deploy. When you're in an era of, let's say, 100 production check-ins a day and continuous deployment, it's very hard to accept that your code is not getting deployed until somebody from Google or Apple approves your application.
So I think that's where server-driven UIs are getting popular among quite a lot of cloud-native or, let's say, mobile-native application developers. They want to do server-driven UI development for native applications so that they can continue to improve their applications without worrying about whether all of their users are on the latest version, or needing to maintain backward compatibility for old releases. So that's where I see this trend kicking in again, and a lot of people will continue to follow that route, as well.
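The pattern Tanmay describes can be sketched in a few lines: the server ships a description of the screen, and the native client only knows how to render a fixed vocabulary of component types, so layout and copy can change without an app-store release. A toy illustration (the component vocabulary and text rendering are invented for the example):

```python
# Server-supplied screen description (would normally arrive as JSON)
screen = {
    "type": "column",
    "children": [
        {"type": "text", "value": "Welcome back!"},
        {"type": "button", "label": "Check out", "action": "checkout"},
    ],
}

def render(node: dict, indent: int = 0) -> str:
    """Map each declared component type onto a native widget (here, text)."""
    pad = "  " * indent
    kind = node["type"]
    if kind == "column":
        return "\n".join([pad + "[column]"] +
                         [render(c, indent + 1) for c in node["children"]])
    if kind == "text":
        return f'{pad}Text("{node["value"]}")'
    if kind == "button":
        return f'{pad}Button("{node["label"]}" -> {node["action"]})'
    raise ValueError(f"unknown component: {kind}")  # old clients fail loudly

print(render(screen))
```

The trade-off is that the client's component vocabulary still needs store releases to grow; only compositions of existing components ship instantly.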
Thomas Betts: It’s some of that cyclical nature of software that we tend to see these patterns going back and forth. We had mainframes that were very client-server and they’re like, “Oh, well now we can have the client be smarter. We can put the code over there,” and everything moves over.
Then we got into the web era and it’s like, “Oh, the website does everything and we just serve up simple HTML.” Then HTML had all this JavaScript and all the app runs in the browser.
So we keep going back and forth trying to find the right balance and now we have a lot of options and people can choose based on the latest technology, what’s the right solution for you.
Tanmay Desphande: The very first time I mentioned this to somebody, it’s like, “Isn’t that something JSP used to do?”
Daniel Bryant: Nice. It’s that cliché of, “History doesn’t repeat, but it rhymes.” But I think in software development it’s poetic, you just constantly see the same thing over and over again and hopefully we make improvements, to your point, Thomas.
Thomas Betts: Hopefully.
Eran Stiller: Just something to clarify there. When we say design for portability, it's not necessarily about, as Daniel said, "write once, run everywhere," because we were there before with Java, .NET, and whatever.
It's more about the fact that all the details are abstracted; you don't necessarily need to care about them. Someone mentioned vendor lock-in, wanting to avoid vendor lock-in. It's not about the fact that tomorrow we can take our entire application, move it to another cloud, and it will still work, because it won't. There are always details that are abstracted away. If we change the database, if we change the platform, something will break, something will need to change.
It's more about the fact that we don't care about those details as we develop things. It makes it much easier for us as developers and as architects when we design our systems, and then later on, once the need arises and we want to change something, it will still take work, but it'll be easier. But I think the focus is less about that and more about the abstraction side of things.
Thomas Betts: Yes, I think portability is, like you said, not the "I need to be able to move it," but the "I don't have to think about how it's anchored to the ground, how my code is directly tied to this." And if you see the evolution of it... I used to write build scripts, and then came the idea of infrastructure as code. Well, now we're a few layers abstracted from the infrastructure as code. I don't need to say, "Deploy this VM." I just say, "I need some compute that does this," and we think about compute as the unit that I need. Or, "I need some data storage": just stick my data somewhere and handle it, and I'm further removed. And whatever's underneath that could vary. Like you said, if I deployed the same thing to AWS or Azure, maybe the underlying stuff is different, but my abstraction looks identical.
Tanmay Desphande: I just wanted to add a different perspective on portability, as well. In the industry I come from, quite a lot of companies, let's say national companies, are very particular about where their data resides, and the Googles and Microsofts of the world are not available in every single region. So building applications that are portable enough to run in every data center, whether it's GCP or Azure or AWS, is quite important, as well. The same goes for the WebAssembly area. Imagine when the Adobes and Siemens of the world used to build heavy desktop applications: with the cloud, they had no answer to "How do I provide this heavy desktop application as a service to my customers?" WebAssembly is now really coming into its own there, so that people are able to stream heavy applications via the browser itself, as well.
Daniel Bryant: Can I just put a pin in something, as well. I've heard a few folks say that, Eran, Tanmay, there: it's really all about the APIs. Because if we anchor ourselves to a vendor-specific API, it's really hard to move away. I think that's the secret of Dapr, if I'm being honest. It's all about the APIs, and they abstract without going to the lowest common denominator. I think that's one of the dangers: you always code to "it's got to be MySQL" or whatever, but then I can't take advantage of Cloud Spanner or Redshift or that kind of thing. I think the APIs are the key, and that's where I can see Dapr, if we can all collect around this CNCF project and go, "Hey, let's all agree," exactly what Eran said, we're building for the future. It's still painful to change, but it's not impossible to change. Right?
Tanmay Desphande: Absolutely, I agree.
Large Language Models [11:53]
Thomas Betts: So I want to move us on because otherwise we’ll be talking about this for the entire afternoon.
Large language models is I think the next thing I want to talk about. It seems lately there’s a new product announcement–I think there were three this morning. Well, here’s a new thing that just came out. GPT-3 is now old and GPT-4 is out. We’ve got ChatGPT, Bing, Bard. Who’s going to be the best chat bot to talk to?
If you look at it, the people in the know have seen that we've been building to this slowly for years, but it seems like it's suddenly come upon us in the last few months.
Are we at some major inflection point? Is this all just hype or is there something there that we really need to consider? That’s my hot take. Who wants to take that and run with it?
Tanmay Desphande: I think I’m pretty excited about the advancement of all those applications from my personal space, but I’m equally worried about my enterprise architect hat, as well. Because I’m not sure in terms of what sort of data is being used to train those models, et cetera. When I’m using it in my personal space, I’m very happy to use those applications. But when I start wearing my enterprise architect hat, I’m equally worried about what are the challenges it’s going to bring to my enterprise if some of my developers are going to use ChatGPT to build applications and deploy that, as well. So that’s where I’m now very excited to see how this evolves, as well, for sure.
Eran Stiller: Yes, I think we're seeing a revolution at the moment. While the ideas are not new (GPT-3 has been around for a while and has been used in various places), lately we've seen GPT-3.5 with ChatGPT, and now GPT-4, and there are other models around; it's not only one. But I think we're seeing major improvements happening all the time, and the speed and velocity at which things happen just keeps getting faster. I think we're at a point where the model is good enough. It's not perfect, there are always issues, but it's good enough to apply to various scenarios that we never thought of using it for before.
So for example, at the company where I work, some people like to hack on things, and they use ChatGPT. They actually took one of our React components and our coding conventions and fed them into ChatGPT or GPT-4. They fed it the coding conventions, they fed it the code for the React component, and went, "What are the bugs here? What are we doing wrong?"
It actually found things, and it's amazing. It actually found things that the developers never thought of. So that's only one way to try to utilize this new thing that we still don't know, exactly, what we're going to do with. But I think the possibilities are endless, and I, for one, am very excited for all the new things we're going to see and all the new APIs that will be layered on top of ChatGPT. Because ChatGPT and GPT-4 are very broad: you can just input some text and get some text back. But once it's integrated as part of other systems using the API layers, and we adapt it to specific fields, that's when I think we'll see even more innovation coming.
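The experiment Eran describes, feeding conventions plus a component to a model and asking for a review, largely comes down to prompt construction. A hedged sketch: the prompt wording is an assumption, and the actual call to a hosted chat API is left commented out rather than guessed at:

```python
def build_review_prompt(conventions: str, source: str) -> list[dict]:
    """Assemble a chat-style message list asking a model to review code
    against a team's coding conventions. Wording is illustrative."""
    return [
        {"role": "system",
         "content": "You are a code reviewer. Flag bugs and any violations "
                    "of the team's coding conventions."},
        {"role": "user",
         "content": f"Coding conventions:\n{conventions}\n\n"
                    f"Component source:\n{source}\n\n"
                    "What are the bugs here? What are we doing wrong?"},
    ]

messages = build_review_prompt(
    "Hooks must appear before any render logic.",
    "function App() { /* component source here */ }",
)
# A real integration would send `messages` to the provider's chat API
# and surface the reply in the review tooling.
```

Keeping the conventions as data, rather than baked into the prompt text, makes it easy to reuse the same harness across repositories.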
Vasco Veloso: I was hearing you speak and also looking at the number of products that have appeared in the past couple of weeks. It almost feels like we got ChatGPT, and then everybody else who was working on large language models looked at it and said, "Oh my God, we need to productize this, we are going to miss the train." And that's actually good because, in a sense, it feels like we may be looking at the start of a new AI summer, or AI spring maybe, where the pressure of getting a product out there may actually produce something quite useful, and everybody's trying to see what the model can do. Well, I am indeed looking forward to what comes out of it.
Daniel Bryant: Perfectly said, and I’m a big fan of Simon Wardley, so Wardley Mapping, if folks have bumped into it. He talks about punctuated equilibrium, like a big disrupter, paradigm shift, exactly what you’ve all mentioned.
I like what you're saying, Vasco. One thing I'm hearing about the productization is that a lot of it's around the UX. I'm thinking back to Docker, which we mentioned earlier on. Containers were not new, but Docker put a fantastic UX on them, plus a centralized repository, Docker Hub; that was a game changer.
Just to your point, Thomas, this morning there was a news drop on, I think it was GitHub, what they're doing, and there's the voice control with ChatGPT and so forth. The UX is what it's about. We as developers would love to be able to chat to our pair programmer or pair architect and then get that feedback.
But there's always that thing you mentioned, Tanmay, of "I want feedback and input, but I want to make the choices." Because I remember when RPA, robotic process automation, came out: UiPath, all that stuff. I remember going to an O'Reilly Software Architecture conference and everyone was super nervous, quite rightly, about this. Because they were like, "My finance team is creating this RPA thing. I've got no idea what security they put on it! I've got no idea what auth they're using!" I can totally see that being a potential issue with the output of these kinds of frameworks now, right?
Thomas Betts: Yes. I think you start opening it up from... Professional developers are not always thinking about security, but citizen developers really don't think about it, because they assume it's just baked in. Everyone picks up their iPhone and thinks it's as secure as it needs to be, and they don't think about, "What are the ramifications if I connect to this site and connect to that site?" It's like it's all fine, right? Then there's a breach and you find out, "Oh, this could have been prevented."
When we lower the bar for who can write the code, and I think that's good in a lot of ways, we have to acknowledge that we need to build better security by default, though not so much that it prevents use.
Low-Code and No-Code Integration [17:21]
Thomas Betts: That does get to the idea of, how is this one more way that, I said citizen developers, would use it and how does it integrate with low-code and no-code. I think, Eran, this was your idea, you wanted to tie in.
Eran Stiller: Yes, so there's been a lot of talk about low-code and no-code systems for a while. It seems like it's been on the InfoQ Trends Report for years; I don't remember when we added it, but it was there before. I think that all of these language models are going to be a huge enabler for low-code/no-code systems.
I remember a few days ago on Twitter, I don't remember who posted it, I saw an example of someone who integrated ChatGPT with the Unity platform, and they had this prompt where you could say, "Add 100 cubes. Add 3 sources of light." It just did it, because behind the scenes it took the prompts, translated them into code, and then just ran them.
We all saw examples of how ChatGPT and GPT-4 can actually create usable code. ChatGPT could only create components, one file, et cetera, simple things; GPT-4 is much improved, and you can actually generate entire simple websites. But still, I think once we take that further, this could actually be a new abstraction over programming languages. Because we started from assembly, and then we got C and C++, and then we got into Java and .NET and higher-level languages.
I think you can think about this as a new programming language which is much more abstract. I don't know if you can do everything with it, maybe not now, maybe that will come in the future, but I think it's inevitable that citizen developers, and also professional developers, will become much more efficient. It's another tool that we can use, and we should learn how to utilize it.
Thomas Betts: Yes, I think the example someone cited to me is that a lot of people's first programming language is Microsoft Excel. Maybe they don't think about it-
Vasco Veloso: And it’s Turing Complete.
Thomas Betts: But when you’re saying, “This cell equals this plus that” or, “Sum up these numbers,” you’re writing code. It’s not just type in text, it is actual code there and you don’t think about it. You don’t think about it as programming, but in a sense, that’s what you’re doing.
I see these large language models as that enabler that gives ordinary people the ability to do a lot more, without having to know how it all works. That’s, again, that force multiplier of having an abstraction layer that’s so powerful.
I think someone pointed out it can create a full website. I saw the demo of, let me sketch out the website, take a picture, and then it generates the HTML for you.
Vasco Veloso: A wireframe, yes.
Thomas Betts: That goes to the, it’s the UX that we need to figure out. If I can take a picture of something and get a working system or I can talk to it, as opposed to I’m sitting there for an hour typing out code, that just saves me time. It’s not doing anything that I couldn’t do or some programmer couldn’t do. It’s doing the thing that’s really easy so I can work on the thing that’s really hard.
Eran Stiller: I think also a lot of people wonder, "Will this put developers out of work? Will we need developers anymore?" I think that's not the case. I think we'll still need developers; they'll just be more efficient and do more things. Because, as I think Vasco mentioned earlier, we're still the ones making the decisions. For example, GPT-3 was, I think, already integrated into GitHub Copilot, which was based on top of it and could generate test cases. I assume at some point it will be updated to GPT-4 and will provide better results.
But still, even when GPT-4 and other models generate code, you still have to look at the code quality; it's not up to the quality conventions we expect. Maybe there are some hidden bugs in there that we don't know about, maybe some security flaws. Of course, with time it'll get better, it will give better results, and we'll be able to go faster, but I think we'll still be the drivers. It's just that the building blocks will be much bigger.
Daniel Bryant: That's an interesting point, Eran. It does point to different skillsets we might need to learn as developers or architects. Because I think more product-minded developers will excel at this; to your point, Thomas, I sketch out a wireframe, happy days. But some folks really like writing code; sometimes I want to write the code. So that's going to be really interesting, the things we'll have to learn. And as architects, the way we phrase the problems matters, since typing is no longer the bottleneck for ourselves or our teams. How's that going to change our jobs? Quite a lot, I think.
Thomas Betts: Yes. I think it’s going to put Stack Overflow out of business before it kicks me out.
Daniel Bryant: Yes, super interesting.
Thomas Betts: But Stack Overflow is probably feeding half the answers to my questions; it's just saving me the work of finding the answer.
Software Supply Chain Security [21:48]
Thomas Betts: I wanted to move us on, again. We don’t talk about security a lot on the Trends Report, but I wanted to bring it up this year because it’s been an interesting last few years with global supply chains being disrupted and the talk of the software supply chain and how that comes into play. We’re not moving molecules around, it’s just electrons. But those bits that we’re downloading for the hundreds or thousands of packages that my code depends on, we’re now getting into this question of trust.
How do I know what the code is that I’m using? Are we just trusting the commons that we’ve got? “Oh, it’s out on NPM or NuGet, so it’s got to be safe?” And how do we verify that the code I run is safe for what I need it to do?
Tanmay Desphande: Every time anyone starts talking about software supply chain security, a meme pops into my head where there's a tower of great things that I've developed, and it all rests on a tiny bit of JavaScript that somebody's maintaining in some corner of the world that I'm not aware of. So I think that says a lot about software supply chain security.
Starting from 2021, we started seeing quite a lot of buzz around it, because there were incidents that pushed us to make a conscious effort in that direction. I've seen quite a lot of, let's say, open frameworks available from the Googles and Microsofts of the world, where they're making them available for everyone, so that everyone can start understanding what level of software supply chain security they have and what else can be done. So I think it's going to be quite evident, as we continue to move ahead in this journey, that we'll keep stressing that more and more, for sure.
Thomas Betts: It's certainly something we might not be thinking about every day, but it needs to be something that the architects looking at the big picture of the system have to consider: "What is the foundation that we're building on? Is it all just a house of cards one level down?"
Daniel Bryant: Getting visibility is a key thing, as Thomas was saying. I think a lot of us here have worked with, say, SBOMs, that kind of stuff. The first thing is actually knowing where there is a problem, and I think the Log4j stuff really triggered a lot of architects to realize, "This is everywhere in my system. I didn't even realize there was Java running there." Do you know what I mean? So having that SBOM, and you mentioned SLSA, Tanmay; there are a bunch of open source frameworks popping up. I think they're super smart. I'd definitely encourage architects to check these things out. Visibility is the first stage of problem solving.
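As a concrete illustration of the visibility Daniel mentions: an SBOM in a standard format such as CycloneDX (as produced by tools like Syft) can be queried mechanically to answer "is Log4j in here?". A toy sketch with an invented SBOM fragment:

```python
import json

# A tiny, invented CycloneDX-style SBOM document for illustration.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"name": "left-pad", "version": "1.3.0",
     "purl": "pkg:npm/left-pad@1.3.0"}
  ]
}
"""

def find_component(sbom: dict, name: str) -> list[str]:
    """Return name@version for every component matching `name`."""
    return [f'{c["name"]}@{c["version"]}'
            for c in sbom.get("components", []) if c["name"] == name]

sbom = json.loads(sbom_json)
print(find_component(sbom, "log4j-core"))  # → ['log4j-core@2.14.1']
```

In practice this query runs across every service's SBOM at once, which is exactly the "where is the Java running?" question from the Log4j incident.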
Design for Sustainability [23:58]
Thomas Betts: What about the idea of designing for sustainability? Vasco, I think you mentioned this a little bit. Are there new trends or new ways that people are thinking about making their software system sustainable or measuring it?
Vasco Veloso: Indeed, I have noticed that there is more talk about sustainability these days. Let’s be honest that probably half of it is because, well, energy is just more expensive and everybody wants to reduce OPEX. But, it also means that there are new challenges.
Let's look at a data center, for example. I am absolutely not an expert in data centers. I know they exist, I can understand them, but please don't ask me to optimize their footprint; that's way out of my league. However, a data center is at a level of abstraction that is just too high, and there are limits to what can be done to reduce the footprint at that level. Initiatives such as the Green Software Foundation, which is part of the Linux Foundation, are trying to come up with a methodology that people, developers, architects, can use to measure the footprint of a software system. And that, depending on the boundaries that you choose, can actually allow you to measure the footprint of individual systems within a larger boundary, such as a data center. Going from there, well, the sky's the limit, I think.
Thomas Betts: Yes, I think the measurement problem is the thing we're working on most right now. Because we say we want to reduce our carbon footprint: "What's my carbon footprint?" If you don't have a way to tell, the best guess we've had is, "Well, let me look at what my cloud spend costs, because that's probably directly correlated to how much the servers cost to run, which is based on the electricity, and let's assume the electricity is not green." But that's going to be wrong in a lot of cases, and it's going to vary a lot. If I deploy to US East, which runs mostly on coal, that's going to be different from a data center that runs on renewable energy, so you have to factor that in. I think that's what the Green Software Foundation is trying to help with: not just what your code is doing and how much it runs, but where it is and how it is run.
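The Green Software Foundation's methodology centers on the Software Carbon Intensity (SCI) metric, roughly SCI = (E * I + M) per R: energy consumed, grid carbon intensity, amortized embodied hardware emissions, and a functional unit such as "per request". A sketch of that arithmetic, with all the numbers invented:

```python
def sci(energy_kwh: float, intensity_gco2_per_kwh: float,
        embodied_gco2: float, functional_units: float) -> float:
    """Software Carbon Intensity per the GSF formula, SCI = (E*I + M) / R.

    E: operational energy (kWh); I: grid carbon intensity (gCO2eq/kWh);
    M: embodied hardware emissions amortized to this workload (gCO2eq);
    R: functional units served (requests, users, ...).
    """
    return (energy_kwh * intensity_gco2_per_kwh + embodied_gco2) / functional_units

# e.g. 1.2 kWh at 450 gCO2/kWh plus 300 g embodied, across 10,000 requests
per_request = sci(1.2, 450.0, 300.0, 10_000)
print(round(per_request, 4), "gCO2eq per request")
```

The point of dividing by R is exactly Thomas's concern: cost or total energy alone does not tell you whether the software itself got more or less carbon-efficient.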
Vasco Veloso: Indeed. Also taking into consideration that the energy is cleaner at some points during the day and is dirtier at others, as well. So that is still yet another factor.
Thomas Betts: So you can change when your software runs. Tanmay, you had something to add?
Tanmay Desphande: Well, yes, and I can relate to that; I breathe in and breathe out all of those things every day as part of my job. The way we are talking about the software bill of materials, we're also going to start seeing every single software provider expected to publish their carbon emissions as part of the services they provide, the way we provide reliability, availability, et cetera. I feel that's going to be a trend we'll see in the next few years, for sure. And what's going to drive that trend is that, the way we do financial accounting today as public-listed companies, we'll soon be expected to do carbon accounting, as well. So it's going to be quite evident that I may start choosing a vendor with greener energy emissions over another one in some contexts, as well. It's going to be quite an interesting trend to watch, for sure.
Thomas Betts: I know a lot of companies, at least in the US, they said, “Oh, we’ve gone carbon-neutral.” Well, they just waited until 2020 happened and they sent everybody home and they stopped having to pay for the electricity in their building. I’m like, “Are you counting everyone running at home and everyone’s internet connection to connect back into the data centers?”
Eran, did you have a last comment about this?
Eran Stiller: So you mentioned earlier where the software is going to run and how it's going to operate. But it's also about when, and how we time things.
I think the key here is accountability, and I think we're starting to get there. Right now as developers we have FinOps running all the time, and it's getting more traction. We're actually being measured on how much money we spend on the cloud, because it's a developer's decision; they can just spin up a large instance and create a $10,000 bill out of nowhere.
But no one's measuring our emissions right now, and there isn't really a good way of doing it. I know the large providers are working on all kinds of calculators to help you estimate your carbon footprint, but it's still not there, and no one's holding us accountable for it. I think once we get to the stage where we're held accountable, both as developers but more importantly as architects, then we'll need to take those decisions into account.
Because right now, as an architect, when I design a system I'm well aware of how many containers I'm using, whether I'm using this database or that database, because of cost. But when we factor carbon emissions into those calculations, I might decide, "Well, I can run this calculation at night," not just because it's cheaper (it might be cheaper because of things like spot instances), but because it also produces fewer emissions, because of the way the power for that data center is generated, and so on.
So I think right now we're at the start of it. It's still an innovator's market, but I think it'll progress as accountability comes to mind and the calculators become better, and it might even become a regulation thing, but who knows.
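Eran's "run it at night" idea is often called carbon-aware scheduling: defer flexible work to whenever the grid is cleanest, using an intensity forecast such as the ones grid-data APIs publish. A toy sketch with an invented forecast:

```python
# Forecast of grid carbon intensity per hour of day (gCO2/kWh, invented
# values; in practice this would come from a grid-data or cloud API).
forecast = {0: 310, 3: 220, 6: 180, 9: 260, 12: 150, 15: 190, 18: 340, 21: 300}

def cleanest_hour(forecast: dict[int, int]) -> int:
    """Pick the hour with the lowest forecast carbon intensity,
    i.e. when a deferrable batch job should be kicked off."""
    return min(forecast, key=forecast.get)

print(cleanest_hour(forecast))  # → 12 (midday solar peak in this invented data)
```

The same lookup generalizes to choosing between regions, not just between hours, which connects back to the "where it runs" point earlier.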
Decentralized Apps (dApps) [29:16]
Thomas Betts: So I'm going to throw out a topic that I didn't mention earlier that I was going to bring up: blockchain and decentralized apps. Blockchain has sat on the innovator section of the trends graph, I think, since it showed up, and we haven't moved it because it has only been applied to crypto and NFTs, and that just didn't seem interesting.
Well, partially because Twitter was supposed to implode this year, Mastodon took off, and I think that's maybe the first time that people saw an actual decentralized app in production and used it. Is that enough to say it's moved over to early adopter, or are blockchain and dApps still in that innovator, hasn't-really-caught-on stage?
Daniel Bryant: Shall I be controversial and say I still think it's niche, or "neesh", depending on how you want to pronounce it? I think there is something interesting there, but there's a lot of, "That's almost like a house of cards built on top of it." We saw most folks, I think Meta, pulling back from NFTs. There's been the SVB bank situation and a lot of crypto stuff related to that, as well. So I think there's distrust in the market.
When we had zero interest rates, I think everyone was just like, "Build all the things!" and clearly not thinking about carbon emissions, which relates to this, as well. But now that we have high interest rates and we are being more conscious of some of these environmental factors, I don't see blockchains based on proof of work being a thing. Proof of stake, maybe, or proof of something else. But the way it was originally architected, I don't see that happening in the current macroeconomic climate.
Eran Stiller: Yes, I think when we prepared for this podcast, someone mentioned that it just doesn’t align with consumer demands or consumer requirements. You mentioned Mastodon earlier.
It's decentralized, and as a techie, as a technologist, I really like it. I think it's cool. If a server shuts down, you can just move your stuff over and it continues working, and no one can shut it down, and there's no censorship in it. These are very good ideas. But when you look at the average consumer, they really don't care: "I just want it to work. I want to go to a certain website, type something in the browser, open an account, log in. I don't care if there's censorship or there isn't censorship." That's most people. Again, some people who live in certain countries care about this very much, but I'm talking about most of the population that uses this tech, and decentralized software is still not there; it still doesn't offer …
I think the only case I can think of where a decentralized app was actually successful was around file sharing and torrents and stuff like that. The reason that was successful is that the consumer requirement actually aligned with the nature of a decentralized app: "I don't want to get caught. I don't want anyone to be able to block me, and I want it to work fast." Decentralized did it much better. So the requirements aligned with the technology there, but I don't think there's another case like that that was very successful.
Thomas Betts: I think you hit it on the head. There’s no consumer need for that as a top level feature and so why add the complexity if we don’t need it?
Socio-Technical Architecture Trends [32:12]
Thomas Betts: So I wanted to wrap up the discussion talking about the socio-technical issues, the ways that architects are working. I think we took it off the trend chart last year, but the “architect as technical leader,” the “What role do you have as an architect?” How do you lead, and also, how are we doing with remote and hybrid work, all of that stuff about how are we doing architecture.
Just general thoughts that people have on that concept. Vasco, you want to kick that off?
Vasco Veloso: Well, I can get started by sharing my personal view, which is that regardless of whether we are designing a piece of software or an enterprise architecture at a 30,000-foot view, it is always important, and this sounds like a cliche, not to lose touch with reality. That means expressions such as "having one foot in the engine room," or always messing around with tech even if you don't write code or build a system in your free time: just ensuring that there are still lines of communication open with the people who are actually building, troubleshooting, debugging, and involved in those calls at 3:00 in the morning. That is the only way we can really understand whether what we are designing works and has value, and then take those lessons to the next project. That's my take.
Thomas Betts: Tanmay?
Tanmay Deshpande: I always like the idea of documenting your architecture. If you Google "How do I document an architecture?", there are no really good, strict, or popular answers to that. And I always get that question from some of the junior people that I'm currently working with.
There are quite a lot of good things available. People generally start talking about ADRs, but those only record the decisions; they don't give the full view of the architecture as it stands right now. So I'm currently struggling to find a really good standard around that. The way I personally try to do it is with a combination of C4, ADRs, and something like arc42; that is the correct answer for me so far. But I certainly feel that something revolutionary has to happen here, considering that software architecture is such an age-old practice that we have been practicing all along, and there's still no good answer for documenting a software architecture well, right?
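To make the ADR part of that combination concrete, a minimal decision record, in the lightweight style commonly paired with C4 and arc42, might look like the sketch below (the service name and decision are hypothetical, not from the discussion):

```markdown
# ADR-007: Use PostgreSQL for the ordering service

## Status
Accepted (2023-11-02)

## Context
The ordering service needs transactional guarantees across order and
payment records. The team already operates PostgreSQL for two other
services, so operational knowledge exists in-house.

## Decision
Use PostgreSQL rather than a document store for the ordering service.

## Consequences
- Strong consistency for order/payment writes.
- Schema migrations become part of the release process.
- Revisit if read volume outgrows a single primary.
```

The point Tanmay raises is that such a record captures only the decision; pairing it with C4 diagrams is what supplies the current-state view of the system.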
Daniel Bryant: Can I just add, great answers already, but coming almost full circle with LLMs and ChatGPT, the "what" is now going to be easier. We can feed something into our systems and ask, "What's going on there?", and it'll spit back an answer and we can have a conversation.
As you alluded to, Tanmay, the "why" is not going to be there, and that's ADRs. Thomas, you've spoken fantastically on previous podcasts about the value of ADRs.
But I almost envision, talking about UX again, the ability to scrub the timeline. Do you know what I mean? I do it sometimes in Google Docs: "Why did we get to this point?" I can go back in the version history, Git, Google Docs, choose your poison. But I think the why is going to be the real thing. The what, ChatGPT is just going to rock that better than we can: feed the code in, get an answer. But why? There were good intentions and good information at the time, but it's an imperfect world, and you're like, "What idiot put this code there? Oh, it was me two years ago," is the classic. I'd love to be able to scrub that timeline and go, "Why did I do it at the time? Oh, these constraints were there, which have changed. Now I can make different decisions," right?
Eran Stiller: Daniel, you've just blown my mind. As you were talking, I started thinking, "Well, if we give ChatGPT our code and also provide our ADRs along with it, all timestamped, and it has access to the Git commit history, I can ask some complex questions about why we've done some stuff and who's at fault for something." That's an interesting concept; I wonder who's going to turn it into a product.
Daniel Bryant: We need to patent this, Eran. I think this is the five of us here could make some serious money, right?
Eran Stiller: Yes, indeed.
Thomas Betts: I want the GPT-enabled forensic auditor that comes in and says, "What did it look like at this time, and why did you do it that way in December 2022?" I don't remember, but all the data would be there if you had captured it.
I personally have found that if I am writing an ADR, using ChatGPT or Bing to ask it questions helps me understand the trade-offs. That's a surprising thing; a year ago I would not have considered having that as my virtual assistant, my pair-programming assistant.
I work in a different time zone than most of my team, so they're not always online when I'm working late, and having that assistant that's always available to answer a question is valuable. Then if I get lazy, "Please just write the ADR for me," and I compare it to what I would've done. So that's a new way of working, and I think it gets to the point: as architects we have to constantly be looking at what the new technology trends are, how we can incorporate them, whether we should incorporate them, and how we can make our process better. And how can we get that to our developers and engineers and everyone else we work with and make their lives better?
Tanmay Deshpande: I think in the remote world, asynchronous communication is the key. ADRs and C4 and all of those things that we keep talking about are the best means to communicate your architecture if you're working in a remote setup, for sure. These are probably also the tools to build better personal communication.
Thomas Betts: I want to thank everybody for joining me and participating in this discussion of architecture and design trends for the InfoQ Trends Report. We hope you all enjoyed this and please go to the infoq.com site and download the report that’ll be available the same day this podcast comes out.
So thank you again, Vasco Veloso, Eran Stiller, Tanmay Deshpande, and Daniel Bryant.
I hope you join us again soon for another episode of the InfoQ podcast.
MMS • RSS
Posted on mongodb google news. Visit mongodb google news
VX-Underground, an open-source community popular amongst threat researchers, has revealed that hackers tried to steal 900GB of Ubisoft's data. The community shared a post on its official X account revealing that the game maker's internal services were compromised in this data breach.
According to a report by Engadget, hackers also attempted to steal the user data of the popular online tactical shooter video game Rainbow Six Siege. The report claims that Ubisoft spotted the breach 48 hours later and was able to revoke the hackers' access before they were able to exfiltrate the data.
In a statement to BleepingComputer, Ubisoft said, “We are aware of an alleged data security incident and are currently investigating. We don’t have more to share at this time.”
How hackers targeted Ubisoft
VX-Underground posted screenshots shared by the attacker. These screenshots allegedly showed that the attacker had accessed Ubisoft's internal software and developer tools, including the company's Microsoft Teams conversations, SharePoint server, Confluence, and MongoDB Atlas.
In a post on X, VX-Underground wrote: “The threat actor would not share how they got initial access. Upon entry they audited the users’ access rights and spent time thoroughly reviewing Microsoft Teams, Confluence, and SharePoint.”
However, the report didn’t specify if hackers were able to get any sensitive information before Ubisoft shut the whole thing down.
Earlier this month, MongoDB Atlas also disclosed a breach. However, based on the company’s disclosure, it does not appear that these two incidents are related.
Ubisoft’s previous data breaches
Ubisoft is a France-based video game publisher known for popular titles such as Assassin's Creed, Far Cry, Tom Clancy's Rainbow Six Siege, and the latest, Avatar: Frontiers of Pandora.
In 2020, Ubisoft was breached by the Egregor ransomware gang. This group was successful in stealing and releasing portions of the company’s Watch Dogs title’s source code. In 2022, the company suffered a second breach that disrupted its games, systems and services.
Article originally posted on mongodb google news. Visit mongodb google news
IFP Advisors Inc raised its stake in shares of MongoDB, Inc. (NASDAQ:MDB – Free Report) by 35.2% during the 3rd quarter, according to the company in its most recent Form 13F filing with the Securities and Exchange Commission (SEC). The institutional investor owned 1,494 shares of the company’s stock after buying an additional 389 shares during the period. IFP Advisors Inc’s holdings in MongoDB were worth $517,000 at the end of the most recent quarter.
Several other institutional investors also recently bought and sold shares of the stock. Moody National Bank Trust Division raised its holdings in MongoDB by 21.9% during the third quarter. Moody National Bank Trust Division now owns 1,641 shares of the company’s stock valued at $568,000 after buying an additional 295 shares in the last quarter. Spence Asset Management increased its stake in shares of MongoDB by 13.2% in the third quarter. Spence Asset Management now owns 832 shares of the company’s stock worth $288,000 after purchasing an additional 97 shares during the period. CWM LLC increased its stake in shares of MongoDB by 8.2% in the third quarter. CWM LLC now owns 2,591 shares of the company’s stock worth $896,000 after purchasing an additional 196 shares during the period. Vontobel Holding Ltd. increased its stake in shares of MongoDB by 5.8% in the third quarter. Vontobel Holding Ltd. now owns 4,224 shares of the company’s stock worth $1,461,000 after purchasing an additional 232 shares during the period. Finally, Carnegie Capital Asset Management LLC increased its stake in shares of MongoDB by 2.7% in the third quarter. Carnegie Capital Asset Management LLC now owns 2,145 shares of the company’s stock worth $742,000 after purchasing an additional 57 shares during the period. 88.89% of the stock is currently owned by institutional investors.
MongoDB Price Performance
MongoDB stock opened at $407.48 on Monday. The business’s fifty day moving average is $382.80 and its two-hundred day moving average is $379.65. The company has a debt-to-equity ratio of 1.18, a quick ratio of 4.74 and a current ratio of 4.74. The firm has a market cap of $29.41 billion, a P/E ratio of -154.35 and a beta of 1.19. MongoDB, Inc. has a 52 week low of $164.59 and a 52 week high of $442.84.
MongoDB (NASDAQ:MDB – Get Free Report) last issued its quarterly earnings results on Tuesday, December 5th. The company reported $0.96 earnings per share (EPS) for the quarter, topping the consensus estimate of $0.51 by $0.45. The business had revenue of $432.94 million for the quarter, compared to the consensus estimate of $406.33 million. MongoDB had a negative net margin of 11.70% and a negative return on equity of 20.64%. The firm’s quarterly revenue was up 29.8% on a year-over-year basis. During the same quarter in the prior year, the company earned ($1.23) EPS. Equities research analysts forecast that MongoDB, Inc. will post -1.64 earnings per share for the current fiscal year.
Analysts Set New Price Targets
A number of equities analysts have recently commented on MDB shares. Bank of America started coverage on MongoDB in a research note on Thursday, October 12th. They issued a “buy” rating and a $450.00 price objective on the stock. Truist Financial reiterated a “buy” rating and issued a $430.00 price objective on shares of MongoDB in a research note on Monday, November 13th. Stifel Nicolaus reiterated a “buy” rating and issued a $450.00 price objective on shares of MongoDB in a research note on Monday, December 4th. Macquarie lifted their price objective on MongoDB from $434.00 to $456.00 in a research note on Friday, September 1st. Finally, Morgan Stanley boosted their price target on MongoDB from $440.00 to $480.00 and gave the company an “overweight” rating in a research note on Friday, September 1st. One equities research analyst has rated the stock with a sell rating, two have given a hold rating and twenty-two have issued a buy rating to the company’s stock. According to MarketBeat.com, the company currently has an average rating of “Moderate Buy” and an average price target of $432.44.
Check Out Our Latest Report on MongoDB
Insider Activity
In other MongoDB news, CEO Dev Ittycheria sold 134,000 shares of MongoDB stock in a transaction dated Tuesday, September 26th. The stock was sold at an average price of $327.20, for a total value of $43,844,800.00. Following the completion of the sale, the chief executive officer now directly owns 218,085 shares in the company, valued at $71,357,412. Also, CAO Thomas Bull sold 518 shares of the company's stock in a transaction on Monday, October 2nd, at an average price of $342.41, for a total transaction of $177,368.38; following the transaction, the chief accounting officer owns 16,672 shares of the company's stock, valued at approximately $5,708,659.52. Both sales were disclosed in filings with the SEC. Insiders have sold 298,337 shares of company stock valued at $106,126,741 over the last three months. 4.80% of the stock is owned by insiders.
About MongoDB
MongoDB, Inc provides a general-purpose database platform worldwide. The company offers MongoDB Atlas, a hosted multi-cloud database-as-a-service solution; MongoDB Enterprise Advanced, a commercial database server for enterprise customers to run in the cloud, on-premises, or in a hybrid environment; and Community Server, a free-to-download version of its database, which includes the functionality that developers need to get started with MongoDB.
See Also
Receive News & Ratings for MongoDB Daily – Enter your email address below to receive a concise daily summary of the latest news and analysts’ ratings for MongoDB and related companies with MarketBeat.com’s FREE daily email newsletter.
MMS • Sergio De Simone
Article originally posted on InfoQ. Visit InfoQ
Google introduced its new Google AI SDK to simplify integrating Gemini Pro, its best-performing model to date, in Android apps. Using this SDK, developers need not build and manage their own backend infrastructure.
According to Google, Gemini Pro is their best model with features for a wide range of text and image reasoning tasks. Gemini Pro runs off-device, in Google’s data centers, and can be accessed through the Gemini API. The easiest way to use Gemini, says Google, is with Google AI Studio, a web-based tool that enables prototyping and running prompts in a browser. Once your results are satisfactory, you can export your model to code and use it in your preferred language, for example, Python, running on your backend.
For Android apps, Google is providing the Google AI client SDK for Android, which wraps the Gemini REST API in an idiomatic Kotlin API. With it, developers need not work directly with the REST API or implement a server-side service to access Gemini models in Android apps.
The following snippet shows how you can generate text from a text-only prompt using the Google AI SDK:
val generativeModel = GenerativeModel(
    modelName = "gemini-pro",
    apiKey = BuildConfig.apiKey
)
val prompt = "Write a story about a magic backpack."
val response = generativeModel.generateContent(prompt)
print(response.text)
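Note that in the Kotlin SDK, generateContent is a suspend function, so in a real app it has to be called from a coroutine. A minimal sketch of how that might look, assuming the snippet runs inside an Android ViewModel and that the StoryViewModel class and tellStory function are hypothetical names, not part of the SDK:

```kotlin
// Sketch only: assumes the com.google.ai.client.generativeai and
// AndroidX lifecycle dependencies are on the classpath.
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import com.google.ai.client.generativeai.GenerativeModel
import kotlinx.coroutines.launch

class StoryViewModel(private val model: GenerativeModel) : ViewModel() {
    fun tellStory() {
        // Launch on the ViewModel's scope so the call is cancelled
        // automatically when the ViewModel is cleared.
        viewModelScope.launch {
            try {
                val response = model.generateContent("Write a story about a magic backpack.")
                println(response.text)
            } catch (e: Exception) {
                // Network failures and blocked prompts surface as exceptions.
                println("Generation failed: ${e.message}")
            }
        }
    }
}
```

Wrapping the call in try/catch matters in practice, since the request goes off-device to Google's data centers and can fail on flaky connections.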
In addition to its text-only model, Gemini also provides a multimodal model, gemini-pro-vision, able to generate text from combined text-and-image input, and supports streaming for faster interactions. In this case, you would use generateContentStream instead of generateContent, as shown below:
val inputContent = content { // multimodal prompt for "gemini-pro-vision"
    image(bitmap) // a Bitmap loaded elsewhere in the app
    text("What is in this picture?")
}
var fullResponse = ""
generativeModel.generateContentStream(inputContent).collect { chunk ->
    print(chunk.text)
    fullResponse += chunk.text
}
To further simplify developers’ workflow, the latest preview of Android Studio introduces a new project template that will guide developers through the steps required to use Gemini Pro, starting with generating an API key in Google AI Studio.
Besides Gemini Pro, Google is also providing a smaller model, Gemini Nano, which can run on-device. This enables applications where data should never leave the device and ensures predictable latency, even when the network is not available. Gemini Nano is available on select devices through AICore, a new system service for Android 14 that aims to simplify incorporating AI in Android apps by taking care of model management, runtime, safety, and more.