Database Administrator (TS/SCI Clearance) (AHT) – Northrop Grumman – Wright, OH – Dice

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

At Northrop Grumman, our employees have incredible opportunities to work on revolutionary systems that impact people’s lives around the world today, and for generations to come. Our pioneering and inventive spirit has enabled us to be at the forefront of many technological advancements in our nation’s history – from the first flight across the Atlantic Ocean, to stealth bombers, to landing on the moon. We look for people who have bold new ideas, courage and a pioneering spirit to join forces to invent the future, and have fun along the way. Our culture thrives on intellectual curiosity, cognitive diversity and bringing your whole self to work – and we have an insatiable drive to do what others think is impossible. Our employees are not only part of history, they’re making history.

Northrop Grumman is seeking a Database Administrator/Principal Database Administrator with Neo4j or similar graph database experience at the National Air and Space Intelligence Center (NASIC) at Wright-Patterson Air Force Base. This DBA will be an active part of a team of Database Administrators who share in the administration of Neo4j, Oracle, MariaDB/MySQL, MongoDB, and others. The selected candidate will take the lead on Neo4j administration activities and provide general support for the other database environments.

Responsibilities include:

  • Maintaining and enhancing graph database capabilities for the Intelligence Systems Infrastructure, Tools, and Enhancements (InSITE) program
  • Installing and configuring new Neo4j graph database solutions and aiding in the maintenance, administration, and security of the new Neo4j database environment
  • Providing general support for other database architectures such as Oracle, MariaDB/MySQL, and MongoDB

This requisition may be filled at a higher grade based on qualifications listed below.

Basic Qualifications for a Database Administrator:

  • Must have an active Department of Defense Top Secret/Sensitive Compartmented Information (TS/SCI) security clearance
  • One of the following:
    • High school diploma with 7 years of relevant experience
    • Bachelor’s degree in a Science, Technology, Engineering or Math discipline with 3 years of relevant experience
    • Master’s degree in a Science, Technology, Engineering or Math discipline with 1 year of relevant experience

Basic Qualifications for a Principal Database Administrator:

  • Must have an active Department of Defense Top Secret/Sensitive Compartmented Information (TS/SCI) security clearance
  • One of the following:
    • High school diploma with 10 years of relevant experience
    • Bachelor’s degree in a Science, Technology, Engineering or Math discipline with 6 years of relevant experience
    • Master’s degree in a Science, Technology, Engineering or Math discipline with 4 years of relevant experience

Preferred Qualifications:

  • Active CompTIA CASP+ or Security+ certification
  • Experience with Oracle, MariaDB/MySQL, or MongoDB Database administration
  • Neo4j or other graph database administration experience
  • Experience with Apache, NGINX, or JBoss Server administration

This position reports to Wright-Patterson AFB, OH, USA; however, it can also be worked from Beavercreek, OH, USA.

Salary Range (Database Administrator): $70,100 USD – $105,100 USD
Salary Range (Principal Database Administrator): $86,300 USD – $129,500 USD

Employees may be eligible for a discretionary bonus in addition to base pay. Annual bonuses are designed to reward individual contributions as well as allow employees to share in company results. Employees in Vice President or Director positions may be eligible for Long Term Incentives. In addition, Northrop Grumman provides a variety of benefits including health insurance coverage, life and disability insurance, savings plan, Company paid holidays and paid time off (PTO) for vacation and/or personal business.

The health and safety of our employees and their families is a top priority. The company encourages employees to remain up-to-date on their COVID-19 vaccinations. U.S. Northrop Grumman employees may be required, in the future, to be vaccinated or have an approved disability/medical or religious accommodation, pursuant to future court decisions and/or government action on the currently stayed federal contractor vaccine mandate under Executive Order 14042.

Northrop Grumman is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, creed, sex, sexual orientation, gender identity, marital status, national origin, age, veteran status, disability, or any other protected class. For our complete EEO/AA and Pay Transparency statement, please visit . U.S. Citizenship is required for most positions.



Wipro, Persistent Systems open tech centres in Texas – YourStory

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Wipro opens 5G tech research centre in Texas

Wipro, a leading Indian IT services company, has opened its new 5G Def-i Innovation Center (“the Center”) in Austin, Texas to help develop safer, more sustainable, and compliant products and services.

The centre will leverage Wipro’s 5G Def-i platform and provide fully integrated offerings for 24x7 product qualification, compliance, pre-certification, and interoperability testing with industry accreditation. It will provide a controlled environment designed to simulate real-world conditions, allowing clients to optimise the performance of 5G networks and devices.

Wipro engineers and researchers will use the Center to certify the compliance of 5G smartphones, tablets, modules, and IoT endpoints with rigorous standards for network accessibility and data performance. Additionally, the Center will play a role in qualifying the performance of 5G mobile network infrastructure, ensuring optimal functionality and efficiency.

The innovation centre will focus on services such as collaboration, research and development, and incubation to help customers develop new products and services to drive growth and success.

MongoDB unveils new education initiatives

MongoDB has announced new education partnerships and initiatives to enable upskilling opportunities for software developers. It has established distribution partnerships with Coursera and LinkedIn Learning to make its training widely available, and new partnerships with Women Who Code, MyTechDev, and Lesbians Who Tech & Allies will provide free certification to 700 developers, ensuring that developers from traditionally underrepresented groups have an opportunity to gain skills with MongoDB Atlas.

In addition to these partnerships, the MongoDB for Academia programme now offers new benefits for educators, such as free MongoDB Atlas credits and certifications. MongoDB University is releasing new online learning courses to reskill database administrators and professionals who use SQL (a query language for relational databases) on how to take advantage of non-relational database technologies.

Persistent Systems opens new centre in Texas

Persistent Systems, a digital engineering company, has launched a Private Equity (PE) Value Creation Hub in Plano, Texas. The centre will expand the company’s onshore footprint and strengthen its presence in the rapidly evolving PE market by providing expertise across the full asset lifecycle for global PE firms and their portfolio companies.

According to the company, the new centre will help PE firms to accelerate both top-line and efficiency-centric value creation levers for their portfolio companies. It aims to serve as a source of playbooks for secure corporate carve-out, rapid globalisation, cost rationalisation, and product acceleration.

Jindal Stainless to deploy Dassault Systemes solutions

Dassault Systèmes has announced that Jindal Stainless Limited (JSL) will deploy its solutions to strengthen its production planning, scheduling and execution processes and better meet customer demand.

JSL will deploy Dassault Systèmes’ “Operations Planning and Scheduling Excellence” industry solution experience based on the 3DEXPERIENCE platform, which leverages DELMIA applications, to optimise these processes virtually. JSL expects the deployment to reduce costs and improve efficiency, realising significant benefits such as cutting lead time by 10%-15% and work-in-progress inventory by 8%-10%.

JSL has recently expanded, doubling its production capacity at two facilities to 2.9 million tons of steel per year. Through this deployment, it aims to strengthen its position in core sectors like automotive and infrastructure while expanding its activity in the lifestyle, aerospace and defence sectors in a sustainable way.

AI information portal now available in Arabic

INDIAai, NASSCOM, and Oman-based AWJ Innovation have collaborated to launch the inaugural Arabic version of “AI for Everyone.” The publication, originally published in English by INDIAai, the National AI Portal of India, has gained recognition for enabling an understanding of AI even for the layperson.

The collaboration between NASSCOM and AWJ Innovation aims to make this knowledge accessible to Arabic-speaking audiences and to promote it across the Gulf Cooperation Council (GCC) region comprising Saudi Arabia, Kuwait, the United Arab Emirates, Qatar, Bahrain, and Oman, giving native Arabic speakers a foundational understanding of AI.

INDIAai and NASSCOM will play a vital role in guiding the content development process for the Arabic edition. AWJ Innovation, a leading technology and innovation company in Oman that delivers programmes and builds tech solutions, brings a deep understanding of the regional context when it comes to startups and AI deployment across industries.



How Resilience Can Help to Get Better at Resolving Incidents

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

Applying resilience throughout the incident lifecycle by taking a holistic look at the sociotechnical system can help to turn incidents into learning opportunities. Resilience can help folks get better at resolving incidents and improve collaboration. It can also give organizations time to realize their plans.

Vanessa Huerta Granda gave a talk about a culture of resilience at QCon New York 2023.

Often, organizations don’t really do much after resolving the impact of an incident, Huerta Granda argued. Some organizations will attempt a post-incident activity, traditionally a root-cause analysis or the 5 Whys, and some teams will do a postmortem. Either way, it’s usually focused on figuring out a root cause and preventing it from ever happening again, she said.

Huerta Granda mentioned reasons why folks aren’t doing activities for deeper learning:

  • The skills required to successfully apply resilience to your culture are not traditional engineering skills; they are communication, analysis, presenting information, convincing people, and getting folks to talk to each other.
  • You need time and training to get good at this and organizations often don’t give their engineers the bandwidth for this.
  • Many organizations will stop at the step of incident response without going into becoming a learning organization.
  • Some organizations are stuck in the old-fashioned pattern of thinking that all outages happen because of a single root cause, without focusing on the sociotechnical systems.

We can apply resilience throughout the incident lifecycle by taking a holistic look at the sociotechnical system, Huerta Granda said. She mentioned that we have to understand that an incident is never “release a bug – revert the bug – everything is back to normal”. Instead, think through the conditions that led to the incident happening the way it did. What did people think was happening? What tools did they have available? How did they collaborate and communicate? This paints a fuller picture of our systems and helps us in the future, Huerta Granda said.

Resilience can help folks get better at resolving incidents, better at understanding what is happening and how to more effectively collaborate with each other, Huerta Granda mentioned.

For the organization, when folks aren’t stuck in a cycle of incidents they will have time to complete the plans the organization has in their roadmap, she said.

To foster a culture of resilience, we need to give people the time to talk to each other, to be curious to look past the technical root cause and into the contributing factors around the experience of an incident, Huerta Granda concluded.

InfoQ interviewed Vanessa Huerta Granda about learning from incidents.

InfoQ: How big can the costs of incidents be?

Vanessa Huerta Granda: It can be huge; it can erode the trust your customers have in you, and depending on the industry, companies can lose their licenses because of incidents. And then there’s the cost to your culture: when folks are constantly stuck in a cycle of incidents, they’re not going to have the bandwidth to be creative engineers.

InfoQ: What tips do you have for creating action items?

Huerta Granda: Some tips are:

  • They need to be decided by the people actually doing them.
  • Management needs to be ok with giving folks time to complete them.
  • They should move the needle.
  • Always have an owner and a due date (so we know they can be completed).
  • It’s ok to give people an out.

Giving people an out means that action items should not be set in stone. If the owner of an item tries a fix and realises it doesn’t work or it will take way longer to complete, they can decide it’s not the best course of action. In that case, let them close the action items with an explanation of the work done.

InfoQ: How can we gain cross-incident insights?

Huerta Granda: You need to focus on individual incident insights first. When you have a body of work, then look at commonalities between your incidents; you may ask yourself, “Are the incidents that take longer to resolve related to a particular technology? Are folks aware of the observability tools that are available?” Once you have found the data you want to share, make sure to always add context to the data you are providing; this turns “data” into “insights”.



Global real estate giant drives digital transformation with MongoDB Atlas | VentureBeat

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

Presented by MongoDB


While Anywhere Real Estate Inc. itself isn’t a household name, its brand portfolio includes some of the most recognized brokerage brands and service businesses in real estate, such as Better Homes and Gardens, CENTURY 21, Coldwell Banker, Corcoran, ERA and Sotheby’s International Realty. Anywhere is the largest franchisor of residential real estate brands in the world, counting agents, brokers and home buyers and sellers among its customers.

Recently, the company has gone all-in on digital transformation and innovation on the backend to maintain its reputation and relevance across brands and customer cohorts.

“We are on a journey to build new agent, broker and consumer experiences, modernizing our platform to stay ahead of the market,” says Damian Ng, senior vice president of technology at Anywhere. “We’re a Fortune 500 organization with a multi-brand operating model encompassing some of the largest brands of the industry, so we’re tackling a lot of uniquely complex problems.”

The ultimate goal of these digital initiatives is delivering a better experience for end users, while propelling results for the businesses of both the company and its customers, says Brian Hanks, vice president of software engineering for Anywhere.

“We’re solving a set of consumer pain points with existing and new consumer and agent products, and we need to have the proper technical skills and partners to do that,” Hanks explains. “We’ve got a significant number of different business initiatives that range all the way from backend data analytics work to end consumer facing systems.”

Right now, Anywhere is leveraging MongoDB Atlas and Atlas Search for enhanced search capabilities and optimizing customer-facing tools to work faster, more efficiently and effectively, in a simple, seamless experience. Here’s a look at how the partnership has transformed the way Anywhere delivers.

Simplifying homebuying by eliminating tech complexity

The company’s main goal, Ng says, is to remove as much of the manual, day-to-day complexity around home buying and selling as possible. Agents and brokers, buyers and sellers alike all go through an enormous number of steps to complete a single real estate transaction.

“Much of our company strategy is about reducing as much of the administrative work as possible, across the life of a real estate transaction, whether that’s listing it, selling it or buying it,” he explains. “So, a lot of our focus is on modernization, both in terms of technology, by embracing a cloud-first ecosystem, but also in terms of our approach to building our tools.”

That means transitioning from a portfolio of individual tools to delivering a platform experience, in which all the tasks required by each person are right at their fingertips. All this is delivered without a hint of the complexity on the back end, where multiple capabilities, sometimes from multiple partners, make the magic happen seamlessly.

MongoDB Atlas as the data and development backbone

About five years ago, engineers initially integrated MongoDB Atlas into the Anywhere enterprise data platform as the backend data store. Soon they began to broaden its use, bringing it to individual applications, such as its marketing web platform, numerous internal enterprise business capabilities, data analytics processes and its newly revamped listing search service.

The different applications pull thousands of data sources into MongoDB. For instance, the marketing web platform uses MongoDB for online transactional processing (OLTP), enabling a large number of real-time database transactions by a large number of users. In other areas, such as the listing search service, a user’s search criteria are handled by Atlas Search and the results are returned via an API call.

MongoDB’s document data model has been especially crucial for building cross-functional teams, making it easier for a given engineer to be truly full-stack. And as a multi-database organization, MongoDB is a critical component of the Anywhere microservices-driven architecture, where it allows each service or team to make its own data repository decisions while aligning to the enterprise guidelines.
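
To illustrate why the document model helps here, a complete business object that would span several relational tables can live in one document, so an engineer can write and read it in a single call. A minimal PyMongo sketch (the connection string and all field names are hypothetical, not from the Anywhere codebase):

from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
listings = client["realestate"]["listings"]

# One self-contained document instead of rows joined across tables.
listings.insert_one({
    "address": {"street": "123 Main St", "city": "Madison", "state": "NJ"},
    "price": 450_000,
    "agent": {"name": "J. Doe", "brand": "CENTURY 21"},
    "open_houses": [{"date": "2023-07-01", "start": "13:00"}],  # embedded array
})

# Read it back with a simple query; no joins required.
print(listings.find_one({"address.city": "Madison"}))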

“It means we don’t have to force everyone to use a single tool or solution,” Ng explains. “We’re able to equip teams with the information they need to optimize, and trust they make the right decisions.”

What sets MongoDB apart

When Anywhere was looking for a database, they wanted a hosted and managed platform to make operational and administrative overhead easier to handle. It was also essential that it be agile enough to not only handle rapid-fire changes but significantly speed up development.

“It’s flexible. It’s scalable. You can make changes on the fly,” Hanks says. “If you notice that something is going wrong after a recent release, it literally takes several minutes to make a change to the Atlas cluster to deal with the problem in the short term while you go back and potentially fix the root cause.”

He adds, “If you’re leveraging all of the different tools that come with MongoDB, Mongoose and the various libraries, you can have a working application very quickly, and you don’t have to worry about a lot of the detail that you would have to worry about with a traditional RDBMS.”

Over the past two years, Anywhere has deepened its relationship with MongoDB as it accelerates its tech innovation. The goals range from training internal teams to work as a stronger engineering community, to partnering with MongoDB to innovate with new tools and to continuously optimize and simplify Anywhere’s approach to delivering a seamless platform to customers.

Taking search to the next level

A recent major project in the company’s digital acceleration tackled the Anywhere search capabilities, which are crucial to both internal and external user experiences. Previously, search experiences had been built with multiple vendor products stitched together with MongoDB at the backend. This led to a great deal more complexity than they needed as it increased the operational overhead with more to provision, secure, upgrade, patch, back up, monitor and scale. Additionally, it added unnecessary architectural complexity with the added difficulty of keeping data in sync between two separate systems. Both issues significantly added to the cognitive load of the Anywhere development teams.

“Given our scale and the flexibility we need on our search solution, because we leverage a common search capability across multiple real estate brands where each brand has unique complexity, we started reevaluating our choices,” Ng explained.

The team at Anywhere built multiple proofs of concept (POCs), tests to verify that candidate databases could support the proposed schemas, queries, and, ultimately, the expected volume and throughput. With that data, it became clear that MongoDB Atlas Search offered the required capabilities while significantly reducing complexity, because Atlas Search eliminates the need to manage multiple technologies independently.

Instead of having to maintain a completely separate infrastructure, Atlas Search is built into Anywhere’s primary database, which means it’s not necessary to endure the overhead of keeping the search cluster data in sync with the primary data, because that happens automatically. Lag and out-of-sync results are no longer a worry, because every time the primary database is updated, whether that’s listing a new property or updating agent information, so is the search index.

“By integrating the database, search engine and sync mechanism into a single, unified and fully managed platform, Atlas Search was the fastest and easiest way to build relevance-based search capabilities directly into our applications,” Ng says. “And the way it performs and how the search is implemented actually directly impacts what the end users can search for and how long it takes for them to get results.”
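
As an illustration of that integration, a relevance-based query against Atlas Search is just another stage in a normal aggregation pipeline. A minimal sketch with the PyMongo driver (the connection string, collection, index name, and fields are hypothetical, not Anywhere’s actual schema):

from pymongo import MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster.example.mongodb.net")
listings = client["realestate"]["listings"]  # hypothetical database/collection

# The $search stage runs inside the database itself, so there is no
# separate search cluster to provision, patch, or keep in sync.
pipeline = [
    {"$search": {
        "index": "listings_search",  # hypothetical Atlas Search index
        "text": {
            "query": "waterfront bungalow",
            "path": ["title", "description"],  # fields to search
        },
    }},
    {"$limit": 10},
    {"$project": {"title": 1, "price": 1, "score": {"$meta": "searchScore"}}},
]

for doc in listings.aggregate(pipeline):
    print(doc["title"], doc["score"])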

Since implementing Atlas Search, the team at Anywhere has observed a 60% improvement in response time for search results.

“It’s an exciting time to be on the tech team here at Anywhere as we’re undergoing a long-term digital transformation to modernize our offerings and architecture,” said Ng. “We’re focused on building delightfully simple and efficient digital experiences while empowering our consumers and affiliated agents with knowledge and insights. That all comes down to prioritizing engineer experience for our teams. Being an engineer at Anywhere offers the opportunity to build products and services at scale that are impacting hundreds of thousands if not millions of individuals on a daily basis.”

Learn more here about the ways the MongoDB Atlas developer data platform can accelerate and simplify how you build with data.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.



MongoDB University Announces New Partnerships to Expand Education Outreach … – CXO Today

MMS Founder
MMS RSS

Posted on mongodb google news. Visit mongodb google news

MongoDB University partners with Women Who Code, MyTechDev, and Lesbians Who Tech & Allies to certify 700 developers


Partnership with LinkedIn Learning and Coursera makes MongoDB University content widely available to upskill millions of new learners


MongoDB, Inc. today announced new education partnerships and initiatives to enable and empower future developers through education and help close the widening software-development skills gap globally. To make the training accessible to more developers globally, MongoDB has established distribution partnerships with Coursera and LinkedIn Learning, both with networks of millions of global learners. And, to ensure that developers from traditionally underrepresented groups have an opportunity to gain skills with MongoDB Atlas, new partnerships with Women Who Code, MyTechDev, and Lesbians Who Tech & Allies will provide free certification to 700 developers. In addition to these partnerships, the MongoDB for Academia program now offers new benefits for educators such as free MongoDB Atlas credits and certifications. MongoDB University is also releasing new online learning courses to reskill database administrators and professionals who use SQL (a query language for relational databases) on how to take advantage of non-relational database technologies. To start learning with MongoDB University, visit learn.mongodb.com.

Whether it’s to compete against incumbents in their market or aggressively go after new opportunities, organizations across all industries are digitally transforming themselves by building their own software. However, despite the increased importance of developers to a company’s success, there is a shortage of software engineers to meet this growing demand. Software developer roles are ranked as the number-one profession in the 2023 Best Jobs Report from U.S. News & World Report, which takes into account median salary, current job openings, and long-term demand.

The backbone of every application is the database, and the choice of which database to use has a direct impact not only on the success of an application, but also on how fast it can be built, deployed, and continually updated. Tens of thousands of customers and millions of developers rely on MongoDB every day as their preferred database to power applications because of its flexible data model, speed to deploy new features, and performance at scale. MongoDB has been named one of the most desired database technologies for aspiring developers who are learning to code since StackOverflow introduced databases as a category in its Annual Developer Survey in 2017. The new initiatives announced today make it possible for new developers across the globe to quickly learn and become certified using MongoDB to build a wide variety of modern applications across high-demand industries:

  • Women Who Code is an international non-profit organization that provides community, programming assistance, and career services to women pursuing technology careers and connects them with companies seeking professional developers. MongoDB University is partnering with Women Who Code to certify 100 members by the end of 2023.
  • MyTechDev is a non-profit organization focused on empowering African students by providing them with practical coding skills and specialization pathways in enterprise technologies. MongoDB University is partnering with MyTechDev to certify 500 people in Nigeria, South Africa, Kenya, and Egypt over the next two years. Software developers represent the highest-paying profession on the African continent according to research from Business Insider Africa.
  • Lesbians Who Tech & Allies is a community of LGBTQ women, non-binary, and transgender individuals in and around tech. MongoDB University is partnering with Lesbians Who Tech & Allies to certify 100 members beginning in October 2023.
  • LinkedIn Learning: MongoDB University developer courses have been added to LinkedIn Learning to reach a wide audience of global learners who are interested in broadening their software development skills. The MongoDB University courses include Introduction to MongoDB and additional courses for using MongoDB with the popular programming languages Java, Python, C#, and Node.js. These courses will be packaged into Learning Paths to prepare students for the MongoDB Associate Developer certification. Millions of people turn to LinkedIn Learning every day to get the skills they need to transform their careers, with LinkedIn members adding 446 million skills to their profiles over the last year alone. This partnership presents an opportunity to expand the reach of MongoDB University content and expose new learners to MongoDB to prepare themselves for a career in software development.
  • Coursera: MongoDB University’s Introduction to MongoDB course is available to more than 124 million global learners on the Coursera platform. Upon completion of the course, electronic certificates will be available to add to LinkedIn profiles or users can opt to receive a physical certificate. Learners on the Coursera platform are able to use official educational materials developed, maintained, and updated by experts at MongoDB.
  • Academia: In addition to upskilling and certifying those who are already in the tech industry, MongoDB is helping close the technology skills gap by educating and empowering the next generation of developers. The MongoDB for Academia program recently launched new program benefits for educators, including more than $400,000 of MongoDB Atlas credits, free certification, and access to free curriculum resources to prepare students with in-demand database skills and knowledge. The program also offers students MongoDB Atlas credits and free certification through the GitHub Student Developer Pack. These benefits are available globally and allow students to enter the workforce with industry-relevant skills and certifications.  
  • New content for upskilling existing professionals on MongoDB University: To support professional developers who want to broaden their skill sets, MongoDB University is releasing new online learning courses for database administrators and SQL professionals. The new MongoDB for SQL Professionals learning path will help those with a SQL background build on and expand their database knowledge and skill set so they can build applications using the most popular non-relational database in the world.

Since refreshing MongoDB University last November, more than 50,000 developers per month have taken advantage of the free, ungated courses, with more than 600 individuals becoming MongoDB certified professionals.

“At MongoDB, we love developers and are pleased to provide free, on-demand educational content for new learners and professional developers who want to expand their existing skill sets on the learning platform of their choice,” said Raghu Viswanathan, Vice President, Education, Documentation, and Academia at MongoDB. “Along with providing a free and frictionless learning experience via our website, we’re excited to partner with LinkedIn Learning and Coursera to distribute our content to meet more learners where they are and partner with non-profit organizations to help train individuals from traditionally underrepresented groups.”

“Our goal is to empower diverse women to excel in technology careers, with more than half of our 343,000 members identifying as software engineers,” said Alaina Percival, CEO at Women Who Code. “MongoDB is one of the most popular database options for building modern applications, so we’re thrilled to help our members get certified using the official MongoDB University course curriculum.”

“There is a huge demand for software development skills among African youth, but many don’t have access to the inputs needed to master coding with a focus on enterprise technologies,” said Wilberforce Oshinaga, Director at MyTechDev. “Getting students MongoDB certified in Nigeria, Kenya, Egypt, and South Africa will boost their knowledge and help them start or grow their careers.”


About MongoDB

Headquartered in New York, MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. Built by developers, for developers, our developer data platform is a database with an integrated set of related services that allow development teams to address the growing requirements for today’s wide variety of modern applications, all in a unified and consistent user experience. MongoDB has tens of thousands of customers in over 100 countries. The MongoDB database platform has been downloaded hundreds of millions of times since 2007, and there have been millions of builders trained through MongoDB University courses. To learn more, visit mongodb.com.



Improving Developer Experience in a Small Organization

MMS Founder
MMS Ben Linders

Article originally posted on InfoQ. Visit InfoQ

A way to improve developer experience is by removing time-consuming tasks and bottlenecks from developers and from the platform team that supports them. How you introduce changes matters; creating an understanding of the “why” before performing a change can smooth the rollout.

Jessica Andersson spoke about improving developer experience in a small organization at NDC Oslo 2023.

Andersson explained that developer experience includes all the things that a software developer does in their work for developing and maintaining software, such as writing the code, testing, building, deploying, monitoring, and maintaining:

I often think of this from a product perspective where a development team is responsible for the product life cycle. A good developer experience allows a developer to focus on the things that make your product stand out against the competition and deliver customer value.

Their strategy for improving developer experience has been to remove time-consuming tasks and bottlenecks. They started out by unblocking developers: if a developer has to wait for someone outside their team in order to make progress, then they are not able to act as an autonomous team and take full ownership of their product life cycle, Andersson said.

Next, they looked at removing time-consuming tasks from the platform team. In order to continue delivering a better developer experience to their developers, they needed to make sure that the platform team wasn’t stuck in an endless upgrade-and-migration loop.

After freeing up time for the platform team, they shifted the focus to removing time-consuming tasks from the developers, leading to an overall better developer experience.

Andersson mentioned that how you introduce changes matters, and it’s easier to apply changes if you have created an understanding of the “why” before you do so. They introduced quite a different workflow for developers that they believed would be a great improvement, but met some resistance in adoption before the developers understood why and how it was an improvement:

In the long run, it turned into a very appreciated way of working, but the rollout could have gone smoother if we spent more effort on introducing the change before performing it.

You need to build the confidence with developers that you will deliver value to them, Andersson said. Having a good relationship with your developers is key to understanding their problems and how you can improve their daily lives, she concluded.

InfoQ interviewed Jessica Andersson about improving the developer experience.

InfoQ: What challenges did you face improving developer experience while being on a small team?

Jessica Andersson: We couldn’t do everything, and we couldn’t do it all at once. We aimed to take on one thing, streamline it and do it well, and once it was “just working” we could move on to the next thing.

We also had to be mindful of the dependencies we brought on and the tools we started using; everything needs to be kept up-to-date, and there’s a real risk of ending up in a state of constant updates with no room for new improvements.

InfoQ: Can you give an example of how you improved your developer experience?

Andersson: For unblocking developers, we had the context of using DNS for service discovery. DNS was handled manually and there were just two people who had access to Cloudflare, of whom I was one. This meant that every time a developer wanted to deploy a new service or update or remove an existing one, they had to come to me and ask for help.

This was not ideal for how we wanted to work, so we started looking into how we could handle this differently in the Kubernetes environment we were using for our container runtime. We looked at the ExternalDNS project, which allows for managing DNS records through Kubernetes resources.

For us it was really simple to get up and running and fairly easy to migrate the existing, manually-created DNS records to be tracked by ExternalDNS as well. Onboarding developers to the new process was quick and we saw clear benefits within weeks of switching over!
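
To make the pattern concrete, here is a minimal sketch of how ExternalDNS is typically driven (the service name and hostname are hypothetical, not taken from Andersson’s setup): a developer annotates a Kubernetes Service, and ExternalDNS creates the matching record in the configured DNS provider, such as Cloudflare.

apiVersion: v1
kind: Service
metadata:
  name: demo-api  # hypothetical service
  annotations:
    # ExternalDNS watches Services for this annotation and
    # creates/updates the record in the configured DNS provider.
    external-dns.alpha.kubernetes.io/hostname: demo-api.example.com
spec:
  type: LoadBalancer
  selector:
    app: demo-api
  ports:
    - port: 443
      targetPort: 8443

With this in place, deploying, updating, or removing a service manages its DNS record through the same Kubernetes workflow, with no manual Cloudflare access required.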

InfoQ: What benefits can a golden path or paved road provide for developers?

Andersson: It allows developers to reuse a golden path for known problems, for instance using the same monitoring solution for different applications. Another benefit is keeping the cognitive load lower; by applying the same way of doing things to different applications, it becomes easier to maintain many applications.

InfoQ: What’s your advice to small teams or organizations that want to improve developer experience?

Andersson: My strongest advice is to assess your own organization and context before deciding on what to do. Figure out where you can make an impact on developer experience, pick one thing and improve it! Avoid copying what others have done unless it also makes sense in your context.



Swift OpenAPI Generator Aims at Streamlining HTTP Client/Server Communication

MMS Founder
MMS Sergio De Simone

Article originally posted on InfoQ. Visit InfoQ

Apple has introduced a new open source package, the Swift OpenAPI Generator, aimed at generating the code required to handle client/server communication through an HTTP API based on its OpenAPI document.

Swift OpenAPI Generator generates type-safe representations of each operation’s input and output, the network calls required to send requests and process responses on the client side, and server-side stubs that delegate request processing to handlers.

Both the client and the server code are based on a generated APIProtocol type that contains one method for each OpenAPI operation. For example, for a simple GreetingService supporting HTTP GET requests at the /greet endpoint, the APIProtocol would contain a getGreeting method. Along with the protocol definition, a Client type implementing it is also generated for use on the client side. Server-side, the package generates a registerHandlers method belonging to the APIProtocol to register one handler for each operation in the protocol.

The generated code does not cover authentication, logging, or retrying. This kind of logic is usually too tightly coupled to the business logic to allow for a general abstraction. However, developers can implement those features in a middleware that conforms to the ClientMiddleware or ServerMiddleware protocols so that it is reusable in other projects based on Swift OpenAPI Generator.

The code Swift OpenAPI Generator produces is not tied to a specific HTTP framework but relies on generic ClientTransport or ServerTransport types, which any compatible HTTP framework should implement to be usable with the generator. Currently, the Swift OpenAPI Generator can be used with a few existing transport frameworks, including URLSession from Apple’s Foundation framework, HTTPClient from AsyncHTTPClient, Vapor, and Hummingbird.

All protocols and types used in the Swift OpenAPI Generator are defined in its companion project Swift OpenAPI Runtime, which is relied upon by generated client and server code.

The generator can be run in two ways: either as a Swift Package Manager plugin, integrated in the build process, or manually through a CLI. In the first case, the plugin is controlled by a YAML configuration file named openapi-generator-config.yaml that must exist in the target source directory along with the OpenAPI document in JSON or YAML format. Using this configuration file, you can specify whether to generate only the client code, the server code, or both in the same target. The CLI supports the same kind of configurability through command line options, as shown in the example below:

swift run swift-openapi-generator \
    --mode types --mode client \
    --output-directory path/to/desired/output-dir \
    path/to/openapi.yaml
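
For the plugin-based workflow, the same choices live in the openapi-generator-config.yaml file rather than in command-line flags. A minimal sketch mirroring the CLI invocation above (the keys follow the project’s documented configuration format, but treat the exact shape as an assumption to verify against the current docs):

# openapi-generator-config.yaml, placed in the target's source
# directory next to the OpenAPI document.
generate:
  - types   # shared type definitions
  - client  # client code; list "server" instead (or as well) for server stubs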

The Swift OpenAPI Generator is still in its early stages, and while it supports most of the commonly used features of OpenAPI, Apple says, it still lacks a number of desired features that are in the works.



QCon New York 2023: Day One Recap

MMS Founder
MMS Michael Redlich

Article originally posted on InfoQ. Visit InfoQ

Day One of the 9th annual QCon New York conference was held on June 13th, 2023 at the New York Marriott at the Brooklyn Bridge in Brooklyn, New York. This three-day event is organized by C4Media, a software media company focused on unbiased content and information in the enterprise development community and creators of InfoQ and QCon. It included a keynote address by Radia Perlman, presentations from four editorial tracks, and one sponsored solutions track.

Dio Synodinos, president of C4Media, Pia von Beren, Project Manager & Diversity Lead at C4Media, and Danny Latimer, Content Product Manager at C4Media, kicked off the day one activities by welcoming the attendees and providing detailed conference information. The track leads for Day One then introduced themselves and described the presentations in their respective tracks.

Keynote Address

Radia Perlman, Pioneer of Network Design, Inventor of the Spanning Tree Protocol and Fellow at Dell Technologies, presented a keynote entitled, The Many Facets of “Identity”. Drawing on the history of authentication methods, Perlman provided a very insightful look at how “the identity problem” may be less well understood than people assume. She maintained that “most people think they know the definition of ‘identity’…kind of.” Perlman went on to describe the many dimensions of “identity,” including: human and DNS naming; how to prove ownership of a human or DNS name; and what a browser needs to know to properly authenticate a website.

The theory of DNS is “beautiful,” as she described it, but in reality a browser search generally returns an obscure URL string. Because of this, Perlman once fell victim to a scam while trying to return her driver’s license. She then discussed how difficult it is for humans to properly follow password rules, questioned the feasibility of security questions, and recommended that people use identity providers. Perlman characterized the Public Key Infrastructure (PKI) as “still crazy after all these years” and discussed how a certificate authority, a device that signs a message saying “This name has this public key,” should be associated with the registry from which the DNS name is returned. She then described the problem with X.509 certificates: Internet protocols use DNS names, not X.500 names. “If being able to receive at a specific IP address is secure, we don’t need any of this fancy crypto stuff,” Perlman said. She then compared the top-down and bottom-up models of DNS hierarchical namespaces, in which each node in the namespace represents a certificate authority. Perlman recommended the bottom-up model, created by Charlie Kaufman circa 1988, because organizations wouldn’t have to pay for certifications; in the top-down model, there is still a monopoly at the root level, and root can impersonate everyone.

In summary, Perlman said that nothing is quite right today because names are meaningless strings and obtaining a certificate is messy and insecure. She suggested always starting with the question, “What problem am I solving?” and comparing various approaches. In a humorous moment early in her presentation, she remarked, “I hate computers” when she had difficulty manipulating her presentation slides. Perlman is the author of the books Network Security: Private Communication in a Public World and Interconnections: Bridges, Routers, Switches, and Internetworking Protocols.

Highlighted Presentations

Laying the Foundations for a Kappa Architecture – The Yellow Brick Road by Sherin Thomas, Staff Software Engineer at Chime. Thomas introduced the Kappa Architecture as an alternative to the Lambda Architecture; both are deployment models for data processing that combine a traditional batch pipeline with a fast real-time stream pipeline for data access. She questioned why the Lambda Architecture is still popular, arguing that its underlying assumption, that stream processors cannot provide consistency, is no longer true thanks to modern stream processors like Flink. The Kappa Architecture has its roots in a 2014 blog post by Kafka co-creator Jay Kreps, Co-Founder and CEO at Confluent. Thomas characterized the Kappa Architecture as a streaming-first, single-path solution that can handle real-time processing as well as reprocessing and backfills. She demonstrated how developers can build a multi-purpose data platform that can support a range of applications on the latency and consistency spectrum using principles from a Kappa architecture. Thomas discussed the Beam Model, how to write to both streams and data lakes, and how to convert a data lake to a stream. She concluded by maintaining that the Kappa Architecture is great, but it is not a silver bullet; the same is true for the Lambda Architecture, whose dual code path makes it more difficult to manage. A backward-compatible, cost-effective, versatile, and easy-to-manage data platform could be a combination of the Kappa and Lambda architectures.

Sigstore: Secure and Scalable Infrastructure for Signing and Verifying Software by Billy Lynch, Staff Software Engineer at Chainguard, and Zack Newman, Research Scientist at Chainguard. To address the rise of security attacks across every stage of the development lifecycle, Lynch and Newman introduced Sigstore, an open-source project that aims to provide a transparent and secure way to sign and verify software artifacts. Software signing can minimize the impact of compromised account credentials and package repositories by checking that a software package is signed by its “owner.” However, it doesn’t prevent attacks such as ordinary vulnerabilities and build-system compromises. Challenges with traditional software signing include key management, rotation, compromise detection, revocation, and identity. Software signing is currently widely supported in open-source software, but not widely used; by default, tools don’t check signatures due to usability issues and key management. Sigstore frees developers from key management and relies on existing account security practices such as two-factor authentication. With Sigstore, users authenticate via OAuth (OIDC) and an ephemeral X.509 code-signing certificate is issued, bound to the identity of the user.

Lynch and Newman provided overviews and demonstrations of Sigstore, including its sub-projects: Sigstore Cosign, signing for containers; Sigstore Gitsign, Git commit signing; Sigstore Fulcio, user authentication via OAuth; Sigstore Rekor, an append-only transparency log such that a certificate is valid if the signature is valid; Sigstore Policy Controller, a Kubernetes-based admission controller; and Sigstore Public Good Operations, a special interest group of volunteer engineers from various companies collaborating to operate and maintain the Sigstore Public Good instance. Inspired by RFC 9162, Certificate Transparency Version 2.0, the Sigstore team provides a cryptographically tamper-proof public log of everything they do. The Sigstore team concluded by stating that there is no single, one-size-fits-all solution; software signing is not a silver bullet, but it is a useful defense; software signing is critical for any DevSecOps practice; and developers should start verifying signatures, including on their own software. When asked by InfoQ about security concerns with X.509, as discussed in Perlman’s keynote address, Newman stated that certificates are very complex and acknowledged that vulnerabilities can still make their way into certificates. However, Sigstore is satisfied with the mature libraries available for processing X.509 certificates. Newman also stated that an alternative would be to scrap the current practice and start from scratch; however, that approach could introduce even more vulnerabilities.

Build Features Faster With WebAssembly Components by Bailey Hayes, Director at Cosmonic. Hayes kicked off her presentation by defining WebAssembly (Wasm) modules as a compilation target supported by many languages, where only one .wasm file is required for an entire application and each module is built from one source language. She then introduced the WebAssembly System Interface (WASI), a modular system interface for WebAssembly, which Hayes claims should really be known as the WebAssembly Standard Interfaces because it’s difficult to deploy modules in POSIX. She then described how Wasm modules interact with WASI via the WebAssembly runtime and the many ways a Wasm module can be executed, namely: plugin tools such as Extism and Atmo, FaaS providers, Docker, and Kubernetes. This was followed by a demo of a Wasm application. Hayes then introduced the WebAssembly Component Model, a proposed extension of the WebAssembly specification that supports high-level types within Wasm such as strings, records, and variants. After describing the building blocks of Wasm components with WASI, she described the process of how to build a component, followed by a live demo of an application, written in Go and Rust, that was built and converted to a component.

Virtual Threads for Lightweight Concurrency and Other JVM Enhancements by Ron Pressler, Technical Lead of OpenJDK’s Project Loom at Oracle. Pressler provided a comprehensive background on the emergence of virtual threads, grounded in a number of mathematical results. A comparison of parallelism vs. concurrency defined their performance measures as latency (time duration) and throughput (tasks/time unit), respectively. For any stable system with long-term averages, he introduced Little’s Law as L = λW, such that:

  • L = average number of items in a system
  • λ = average arrival rate = exit rate = throughput
  • W = average wait time in a system for an item (duration inside)

A comparison of threads vs. async/await in terms of scheduling/interleaving points, implementation, and recursion/virtual calls identified the languages that support these attributes, namely JavaScript, Kotlin, and C++/Rust, respectively. After introducing asynchronous programming, syntactic coroutines (async/await), and the impact of context switching on servers, Pressler tied everything together by discussing threads and virtual threads in the Java programming language. Virtual threads are a relatively new feature, initially introduced in JDK 19 as a preview. After a second preview in JDK 20, virtual threads will become a final feature in JDK 21, scheduled for release in September 2023. He concluded by defining the phrase “misleading familiarity” as “there is so much to learn, but there is so much to unlearn.”

Summary

In summary, day one featured a total of 28 presentations with topics such as architectures, engineering, language platforms and software supply chains.



OpenAI Announces Function Calling, Allowing Developers to Describe Functions

MMS Founder
MMS Daniel Dominguez

Article originally posted on InfoQ. Visit InfoQ

OpenAI has introduced updates to its API, including a capability called function calling, which allows developers to describe functions to GPT-4 and GPT-3.5 and have the models return a JSON object containing the arguments needed to call those functions.

According to OpenAI, function calling facilitates the development of chatbots capable of leveraging external tools, transforming natural language into database queries, and extracting structured data from text. These models have undergone fine-tuning to not only identify instances where a function should be invoked but also provide JSON responses that align with the function signature.

Function calling plays a crucial role in enabling AI models to interface intelligently with external tools and APIs. By specifying functions to these models, developers give them access to a large selection of functionality and services. This connection enables AI models to accomplish tasks beyond their native capacities, such as using external tools to respond to queries, searching databases, or extracting structured data from unstructured text. As a result, function calling makes AI models more versatile and effective instruments for tackling complex challenges in the real world.

With the introduction of gpt-4-0613 and gpt-3.5-turbo-0613, developers now have the ability to describe functions to these models. As a result, the models can intelligently generate JSON objects that contain the necessary arguments to call those functions. This exciting development offers a more dependable means of connecting GPT’s capabilities with external tools and APIs, opening up new possibilities for seamless integration.
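
A minimal sketch of the flow in Python, following the pattern OpenAI described at launch (the get_current_weather schema mirrors OpenAI’s announcement example; the surrounding wiring is illustrative and targets the openai SDK as it existed in mid-2023):

import json
import openai  # pip install openai (the 0.27.x-era API)

# Describe a function the model may choose to call.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City, e.g. Boston, MA"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether to call
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model returns the function name plus JSON-encoded arguments;
    # the application runs the function and can feed the result back.
    args = json.loads(message["function_call"]["arguments"])
    print(message["function_call"]["name"], args)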

Through careful fine-tuning, these models have developed the capability to recognize situations where a function should be invoked based on the user’s input. Additionally, they have been taught to provide JSON responses that match the particular function signature. Developers can now more reliably and consistently get structured data from the model by using function calling.

In addition to function calling, OpenAI is introducing an enhanced variant of GPT-3.5-turbo that offers a significantly expanded context window. The context window, measured in tokens or units of raw text, represents the amount of text considered by the model prior to generating further text. This expansion allows the model to access and incorporate a larger body of information, enabling it to make more informed and contextually relevant responses.

Function calls in AI development allow models to utilize tools designed by developers, enabling them to extend their capabilities and integrate customized functionalities. This collaborative approach bridges the gap between AI models and developer-designed tools, fostering versatility, adaptability, and innovation in AI systems.



Azure API Center for Centralized API Discovery and Governance in Preview

MMS Founder
MMS Steef-Jan Wiggers

Article originally posted on InfoQ. Visit InfoQ

At the recent annual Build conference, Microsoft introduced the preview of Microsoft Azure API Center – a new Azure service and a part of the Azure API Management platform that enables tracking APIs in a centralized location for discovery, reuse, and governance.

With API Center, users can access a central hub to discover, track, and manage all APIs within their organization, fostering company-wide API standards and promoting reuse. In addition, it facilitates collaboration between API program managers, developers who discover and consume APIs to accelerate or enable application development, API developers who create and publish APIs, and other stakeholders involved in API programs.

Source: https://github.com/Azure/api-center-preview

The key capabilities of API Center include:

  • API inventory management: a centralized collection of all APIs within an organization. These APIs can vary in type (such as REST, GraphQL, gRPC), lifecycle stage (development, production, deprecated), and deployment location (Azure cloud, on-premises data centers, other clouds).
  • Real-world API representation: detailed information about APIs, including their versions, specifications, deployments, and the environments in which they are deployed.
  • Metadata properties: enhanced governance and discoverability through organizing and enriching cataloged APIs, environments, and deployments with unified built-in and custom metadata throughout the entire asset portfolio.
  • Workspaces: management of administrative access to APIs and other assets with role-based access control.

Regarding API inventory management, Fernando Mejia, a senior program manager, said during a Microsoft Build session on APIs:

With API Center, you can bring in your APIs from platforms such as Apigee, AWS API Gateway, or MuleSoft API Management. The idea is to have a single inventory for all of your APIs.

Mike Budzynski, a senior product manager at Azure API Management at Microsoft, explained to InfoQ the release roadmap of the API Center feature:

API Center is currently in preview for evaluation by Azure customers. At first, we are limiting access by invitation only. We plan to open it up for broader adoption in the fall.

In addition, in a tech community blog post, Budzynski wrote:

During the preview, API Center is available free of charge. Future releases will add developer experiences, improving API discovery and reusability, and integrations simplifying API inventory onboarding.

Microsoft currently allows a limited set of customers to access API Center through a request form.
