DEFINE
The DEFINE command of the IDCAMS utility is used to create, or define, a new dataset.
Here it is used to define the GDG base, which acts as a reference point for all related generations.
This base holds properties such as the generation limit and the deletion rules.
NAME (gdg-base-name)
This is where you specify the name of your GDG base. It's the common name that will be shared by all the generation datasets.
LIMIT (nnn)
Defines the maximum number of generations that can exist under this GDG.
You can set any number from 1 to 255. Once the limit is reached, older generations are removed according to the options provided with EMPTY and SCRATCH.
EMPTY
When this option is used and the generation limit is reached, all existing generations are deleted at once when a new generation is added.
This is useful when you want to clear out all old data regularly.
NOEMPTY
With this option, when a new generation is added and the limit has been reached, only the oldest generation is deleted.
This ensures that recent generations remain available and only the oldest one is removed.
SCRATCH
When a generation is deleted, it is completely removed from the system both from the catalog and from disk storage.
You cannot access the dataset again once it's scratched.
NOSCRATCH
This option removes the dataset from the catalog only; the file still exists on disk.
You can still access it later if you know the exact name of the dataset.
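Once the base exists, individual generations are created and read using relative generation numbers rather than absolute names: (+1) allocates a new generation, (0) refers to the current generation, and (-1) to the previous one. Below is a minimal sketch of a job that copies the current generation into a new one; the base name MY.SALES.GDG, the job card, and the space and record attributes are illustrative assumptions, not part of the original example:

```
//NEWGEN   JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP01   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//* Read the current (most recent) generation
//SYSUT1   DD DSN=MY.SALES.GDG(0),DISP=SHR
//* Allocate and catalog a brand-new generation
//SYSUT2   DD DSN=MY.SALES.GDG(+1),
//            DISP=(NEW,CATLG,DELETE),
//            SPACE=(TRK,(5,5),RLSE),
//            RECFM=FB,LRECL=80
//SYSIN    DD DUMMY
```

When the job ends, the system resolves MY.SALES.GDG(+1) to an absolute name of the form MY.SALES.GDG.GxxxxVyy and catalogs it; once more generations exist than LIMIT allows, the oldest ones are rolled off according to the EMPTY/NOEMPTY and SCRATCH/NOSCRATCH options described above.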
//CREGDG   JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP01   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG(NAME(gdg-base-name) -
         LIMIT(nnn) -
         EMPTY|NOEMPTY -
         SCRATCH|NOSCRATCH)
/*

Mainframe Open Education (MOE) is sponsored by the Open Mainframe Project (OMP). The community partnership focuses on open-sourcing mainframe learning roadmaps and community knowledge transfer, making it easier for hiring managers to access a plan and to co-develop valuable assets, much of which lives in the minds and archives of our respective organizations.
The vision of MOE is to build a place for mainframe experts to share, and for others to consume, the tribal knowledge gained over years of experience. The MOE board of experts will promote submissions that meet quality standards for the end user; all are cataloged in a consumable roadmap orientation. There are thousands of experts in the market ready to pass knowledge to the next generation. Join the project to learn about this community partnership and how to participate and contribute.
The MOE mission is to offer all mainframe users a platform to cultivate skill onboarding that is most critical to future hires and to allow for knowledge sharing and community contribution. We seek to align the best-of-breed foundational curriculum to cultivate new mainframe skills for today's hybrid data center. Specifically, MOE can help with the following aspects:
Mainframe ecosystem collaboration is designed to support companies as they onboard new mainframe talent.
Create a community through shared ownership and a shared platform via the Open Mainframe Project.
Encourage market contributions of education assets.
The project was designed based on five initial phases.
Our goal is to ensure we have the most relevant information organized in a logical and accessible way for the community.
This structure is a starting point where the community can contribute and collaborate to develop and continuously improve the content library.
Contribute
Edit
Review
Deploy
Here are the five chapters of MOE content currently under development:
We encourage YOU to contribute to additional chapters to MOE!
Here you can find our MOE Project Management System. It is open for anyone to view and also to become an active member of this team.
Here is the retired Trello board that we used for project tracking and management (prior to 2024):
Copyright Contributors to Mainframe Open Education Project.
This project is licensed under a Creative Commons Attribution 4.0 International License (CC-BY-4.0), except for content from other sources where a different license is otherwise noted.
By submitting any contribution to this project, you certify that you accept the Developer Certificate of Origin 1.1 as required by the Open Mainframe Project.
Develop learning roadmaps for job roles on the mainframe
Provide business leaders easy access to free foundational education
Address the faculty & university awareness gaps with new access to learning curriculum
Chapter 2 reviews technology training that is important to the path, including z/OS fundamentals, TSO / ISPF, JCL, asset management, etc.
This chapter reviews and describes the different roles one can hold on the mainframe, including services, developer, etc.
Chapter 4 provides learners with in-depth module training based on the role chosen, including hands-on labs, interactive demonstrations, etc.
In this chapter, we delve into offering career and professional development opportunities tailored to mainframers at different experience levels.
We are a community-centric project and welcome the participation and contribution of all members of the mainframe community. Whether you would like to contribute content, or want to discuss existing content and how to improve it, we welcome and encourage the interaction.
What type of content can you contribute? Content comes in all forms, the only limitation is that it should be publicly available, attributed to the author, and not violate any copyrights. We welcome links to articles, infographics, courses, videos, blog posts, podcasts and more. All submissions will be reviewed by the MOE team and upon acceptance, merged into the workspace.
Review our contribution guidelines before contributing your content.
Use the following invitation link to sign in as an editor:
Once your access is approved, there are two ways of contributing. Please review the help documents to understand how these processes work.
Step 3) When creating a change request, first click the "Edit" button at the top right of the site page. Next, provide a description for your request; the field is located at the top left of the page after you click "Edit." When submitting content through the change request, provide a high-level description of the content and the name of the content creator.
Note: Change requests should be made per page. If you have suggestions for more than one page, create a separate Change Request for each page; otherwise, your Change Request might be rejected.
For more detailed information on how to contribute to the project, please download our Contributor Quick Reference Guide below.


Learn about the Mainframe Open Education (MOE) project, including our mission, MOE project phases, our community, and more.
Welcome to Mainframe Open Education (MOE), proudly sponsored by the Open Mainframe Project (OMP). We're thrilled to have you join our vibrant community, where our partnership is dedicated to open-sourcing mainframe learning roadmaps and facilitating knowledge transfer.
We aim to create an inclusive space where everyone, including hiring managers, can easily access plans and collaborate on developing valuable assets. The wealth of information within our community spans minds and archives across various organizations, ensuring a rich and welcoming environment for all.
Connect with mainframers on Slack: #omp-education-project
Follow us on Instagram @MainframeOpenEducation
To start as a learner and contributor, please read MOE's introductory documents.
Learn about personas that can contribute content and share their skills and learning experience.
Contribute to fill existing gaps and needs.
Provide feedback on the existing documentation
Discover new ways of building the content and learning experiences.
Consume the content for your own skill building
Contribute to fill existing gaps and needs.
Consume the content for your own skill building
Use existing content to accelerate students' skill building
Contribute to fill existing gaps and needs.
Consume the content for your own skill building
Validate the concepts and knowledge of your students
Contribute to fill existing gaps and needs.
Consume the content for your own skill building
Validate your concepts and knowledge by contributing and creating new content
Stay current and relevant to the community
Provide guidance to your employees on the opportunities the mainframe community enables.
Guide employees to resources and learning paths to expand their skills.
Encourage employees to contribute content and/or provide feedback.
Contribute to fill existing gaps and needs.
Expose your content, courses and assets to the community
Learn from the trainees and learners trends and needs
INTRODUCTION What is Enterprise Computing?
CHAPTER 1 What is a Mainframe Today?
CHAPTER 2 Foundational Technology
CHAPTER 3 Roles in Mainframe
CHAPTER 4 Deeper Dive in Role Chosen
CHAPTER 5 Career Path Opportunities
Learn more about the governance of Mainframe Open Education.
This project aims to be governed in a transparent, accessible way for the benefit of the community. All participation in this project is open and not bound to corporate affiliation. Participants are bound to the Code of Conduct.
The Editor role is the starting role for anyone participating in the project and wishing to contribute education assets.
Ensure your contribution is in line with the project's guidelines
Submit your contribution as a Change Request
Have your submission approved by a Reviewer
The Reviewer role enables the contributor to approve Change Requests, but it also carries the responsibility of being a responsible leader in the community.
The current list of active Reviewers can be seen on the Reviewers page.
Show your experience with the project through contributions and engagement on the community channels.
Be voted in as a Reviewer by a majority of the currently active Reviewers.
Monitor Slack
Triage Discussions
Make sure that ongoing Change Requests are moving forward at the right pace, or close them
If a Reviewer is no longer interested or cannot perform the duties listed above, they should volunteer to be moved to emeritus status. In extreme cases, this can also occur by a vote of the Reviewers per the voting process below.
The MOE Project Lead is designated to the Technical Steering Committee. They are the primary point of contact for the project and representative to the Open Mainframe Project's Technical Advisory Council. The Lead(s) will be responsible for the overall project health and direction, coordination of activities, and working with other projects and committees as needed for the continued growth of the project.
Project releases will occur as soon as a Reviewer approves a Change Request. Any changes will be live thereafter.
In general, we prefer that Issues, Discussions and Reviewer membership are amicably worked out between the persons involved.
If a dispute cannot be decided independently, the Reviewers can be called in to decide an issue. If the Reviewers themselves cannot decide an issue, the issue will be resolved by voting. The voting process is a simple majority in which each committer receives one vote.
This project, just like all of open source, is a global community. In addition to the Code of Conduct, this project will:
Keep all communication on open channels
Be respectful of time and language differences between the community members
Ensure tools can be used by community members regardless of their region
If you have concerns about communication challenges for this project, please reach out to the project team.
A digital badge is a simple but great way to acknowledge and share your contributions with your professional circle. You can attach your badge to a website, mail signature, or social network, and with one simple click, employers and other interested parties can easily view and verify your contributions.
Each badge has defined "Earning Criteria" and requirements for earning the specific badge. Additional badges will be created so be sure to check back often.
Must be a member of
Create one blog post/library entry on either:
Mainframe foundational concepts, to increase learning assets for the community, using blog posts, videos, articles, and existing or new references.
Or share your mainframe journey.
It should use the GitBook course structure.
Must be engaged with the community via our channels (Slack channels and Comments on Gitbook)
Must be a member of
Must be engaged with the community via our channels:
Engage in the Slack Channels
Attend SUG or Bi-weekly meeting
Must be a
Collaborate with other contributors and take part in reviews of their content.
Demonstrate engagement via content and engage with content in the form of likes, follows, and meaningful comments.
Minimum 2 years as a contributor and active member
In today's fast-changing technology industry, Mainframe Open Education (MOE) is needed to address the scattered collection of education materials available today by curating resources that include the unique expertise of the mainframe community.
It aims to provide a community support platform where experts can share their knowledge and materials. The Mainframe Open Education project offers a convenient, easy-to-use platform through which experts can share up-to-date materials and collaborate with the broader mainframe community.
It can also provide a clear learning path and help fill the technology skills gap by offering materials at no or low cost, opening opportunities for community support and engagement, where experts and seasoned professionals can share their knowledge and best practices.
Review the proposed content in the GitBook
Create three blog posts/library entries related to mainframe foundational concepts to increase learning assets for the community, using blog posts, videos, articles, and existing or new references. They should use the GitBook course structure (see How to contribute).
To understand the role of the mainframe, we begin our journey by exploring enterprise computing, the counterpart to personal computing.
In the fast-paced world of modern business, enterprise systems stand as the backbone of operations, facilitating critical functions and driving organizational success. But what exactly distinguishes enterprise computing from its counterpart, and how does it shape the technological landscape of today's global enterprises? Let's delve into the intricacies of enterprise systems and explore their profound impact on businesses worldwide.
If you have questions or suggestions regarding this project, please feel free to reach out to us on the Open Mainframe Project's Slack under the channel.
If you require an invite to the Slack workspace, please visit .
The Mainframe Open Education project is committed to sharing with and engaging the community by joining events around the mainframe. Familiarize yourself with previous and upcoming events and engage with us.
Stay tuned for more events!
IBM Tech Exchange @ Orlando, FL- October 2025
Session Topic: Transforming z/OS Software Management with an Open-First Approach Using Ansible and Zowe
Presenters: Jan Prihoda and Rose Sakach (Broadcom)
Session Topic: How to Contribute to Linux on the Mainframe
Presenters: Sarah Julia Kriesch (Kyndryl) and Elizabeth Joseph (IBM)
Session Topic: CBT Tape – The Treasure Trove of Mainframe Utilities
Presenters: Joe Winchester (IBM) and Reg Harbeck (Mainframe Analytics)
Session Topic: Galasa – Scaling Integration Testing for the Enterprise
Presenters: Louisa Denly (IBM)
Session Topic: Skill Up! Mainframe Learning for All
Presenters: Kathleen Nordstrom (Broadcom)
Session Topic: Unlock the Full Potential of Zowe Explorer – Essential Tips and Tricks AND What Mainframe Connection Protocols (FTP, SSH, or z/OSMF) Can I use for Zowe Clients?
Presenters: Dan Kelosky (Broadcom)
Session Topic: Mastering Mainframe Architectures – Transaction Processing on z/OS AND OpenTelemetry on Mainframe SIG: Progress Updates
Presenters: Rudiger Schulze (IBM)
Session Topic: Take your mainframe career to a new level by contributing to Zowe
Presenters: Dan Kelosky (Broadcom) and Rose Sakach (Broadcom)
Session Topic: Zelma – z/VM and Linux Modern Administration
Presenters: Mike MacIsaac (Sine Nomine Associates)
Session Topic: From Punch Cards to VS Code
Presenters: Wolfram Greis (European Mainframe Academy)
Session Topic: A not-too-technical deep dive into Feilong
Presenters: Aazam Thakur (Open Mainframe Project Ambassador) and Mike Friessenger (SUSE)
Mainframe Technical Exchange @ Prague - March 2025
Mainframe Technical Exchange @ Plano - September 2025
SHARE @ Kansas City - August 2025
Open Mainframe Summit @ Las Vegas - September 11 - 13, 2023
SHARE New Orleans 2023 - August 13 - 18, 2023
Mainframe Technical Exchange - October 4-6, 2022
SHARE Virtual Experience - August 9-13, 2021
SHARE Virtual Summit 2021 - March 2021
Student User Group Recordings: Mainframe Open Education Playlist
Techstrong TV Episode 24 (2023)
Share 2020 Mainframe Open Education Panel


Here you can find our valuable Reviewers who volunteered their time for the project.
Interested in giving back to the mainframe community? Interested in becoming a Reviewer? Nominate yourself now at https://forms.gle/9M1zssvrMTAegZKn8.
For more information on how to review submissions, please check out our quick reference guide on the process.
The Leads will review and vote on all nominations (Core Team access only) on a monthly basis.
Once approved you will be added to the Reviewer list and receive guidance from the Core Team/Leader below.
Learn what mainframes are and what place they take in your daily life.
Imagine waking up every morning, reaching for your smartphone, checking your email, and sipping on your favourite coffee. Little do you know, a silent powerhouse called a mainframe is working behind the scenes to make it all happen seamlessly. Think of a mainframe as the superhero of computers, capable of handling massive amounts of data and transactions.
While they might not be as flashy or portable as your smartphone, these machines power many of the things we rely on every day. Imagine all of the banking transactions, airline reservations, and government records that are moving around the world in a moment. That's the power of the mainframe!
Here are some real-life examples of where you might encounter mainframes:
When you swipe your credit card, the mainframe verifies your information and approves the transaction in milliseconds.
dahlbergra@vcu.edu — Lead
Roles in Mainframe — Viviane Sanches (vipadua@kyndryl.com), Lead
Deeper Dive in Role Chosen — Paul Newton (paulnewt@us.ibm.com), Lead
Career Path Opportunities — ? (?), Lead
Additional Community Resources — Lauren Valenti (lauren.valenti@broadcom.com), Lead
yla@us.ibm.com
Robert Dahlberg (dahlbergra@vcu.edu), Lead
Jenn Francis
Guilherme Cartier (gcartier@br.ibm.com)
Oleksiy Derbas (oleksiy.derbas@broadcom.com)
Zeibura Kathau (zeibura.kathau@broadcom.com)
Yury Demin (yury.demin@broadcom.com)
Andrew Jandacek (andrew.jandacek@broadcom.com)
Igor Kazmyr (igor.kazmyr@broadcom.com)
Section — Reviewer Name (Reviewer Email) — Position
Mainframe Open Education — Kelle Veverka (kelle.veverka@broadcom.com) — Lead
What is a Mainframe — Lauren Valenti (lauren.valenti@broadcom.com) — Lead
Deborah Carbo (deborah.carbo@broadcom.com)
Viviane Padua (vipadua@kyndryl.com) — Lead
Bob Dahlberg
Yvette LaMar
When you book a flight, your reservation is processed and your seat is confirmed, thanks to the mainframe's lightning-fast speed.
When you browse a massive online store, the mainframe retrieves millions of products and displays them on your screen in seconds.
These machines are robust, rugged, and always ready for action. You might not have heard about them, but they've been with you, quietly ensuring everything ticks smoothly from dawn till dusk. Join us on a journey to uncover the hidden marvels of the mainframe, a technological wizardry that's closer to your daily routine than you could ever imagine.
Some folks might dismiss mainframes as ancient relics, assuming they're outdated technology. But before we start exploring, let's take a moment to unravel the mystery as Rosalind Radcliffe, an IBM Fellow, shares her insights on the subject.
Watch the video "Mainframe Myths Debunked" by Rosalind Radcliffe, and answer the following questions:
How does the introduction of the Telum processor challenge the perception of mainframes being "old" systems?
In what ways does the script debunk the myth that mainframes are "expensive"?
How does the transaction processing efficiency of mainframes contribute to their modern capabilities?

Exploring the Nexus of Traditional Power and Modern Agility
In the ever-evolving landscape of technology, two formidable forces have emerged as pillars of computing infrastructure: the Mainframe and the Cloud. Each represents a distinct era in the evolution of computing, yet their convergence in today's digital ecosystem is reshaping the way organizations approach data processing, storage, and management.
Please refer to the article "Mainframe vs. Cloud: Computing for the Future" by Kingson Jebaraj. It comprehensively analyzes Mainframe and Cloud Computing, detailing their similarities, differences, advantages, and disadvantages through informative comparison tables. Additionally, it outlines essential factors to consider when deciding between Mainframe and Cloud Computing solutions.
Having explored the distinctions, advantages, and considerations between Mainframe and Cloud Computing, it's crucial to understand the continuing relevance of mainframe technology in today's cloud-centric world. The forthcoming article, "Why Mainframes Matter in the Age of Cloud Computing," delves into the enduring significance of mainframes, underscoring their unparalleled reliability, security, and performance capabilities that, even in an era dominated by cloud solutions, remain critical for the operations of large organizations and industries worldwide. This piece will illuminate how, despite the surge in cloud computing, mainframes continue to be a foundational technology, seamlessly integrating with modern infrastructures to support the complex demands of contemporary computing.
Mainframes hold an irreplaceable role, and deliver clear benefits, amidst the cloud computing revolution. We now transition to a broader perspective with our next article, "Cloud or Mainframe? The Answer is Both." This piece addresses the evolving landscape of technology where the growth trajectories for both mainframes and cloud computing are parallel and interconnected. It emphasizes the synergistic potential that harnessing both technologies offers, illustrating how businesses and large organizations can leverage the unique strengths of mainframes and cloud platforms to achieve superior efficiency, scalability, and innovation. This article is a pivotal read for understanding the complementary dynamics between mainframe robustness and cloud agility, guiding readers through the strategic integration of these technologies for future-proofing IT infrastructure.
Enterprise computing refers to the use of computer systems and software within a large organization or enterprise to handle various business processes and tasks. It involves the deployment and management of extensive IT infrastructure to support the diverse needs of a business, ranging from data storage and processing to communication and collaboration.
One fundamental aspect of enterprise computing is the establishment of a robust and scalable network infrastructure. This network serves as the backbone for connecting different departments, locations, and employees, enabling seamless communication and data transfer. The scale of enterprise computing often requires sophisticated solutions such as virtual private networks (VPNs), firewalls, and other security measures to protect sensitive data and ensure the integrity of the network.
Enterprise computing also involves the deployment of centralized databases and servers to store and manage vast amounts of data generated by the organization. This data can include customer information, financial records, inventory details, and more. The ability to efficiently store, retrieve, and process this data is crucial for the smooth functioning of the enterprise.
In addition to infrastructure, enterprise computing encompasses various software applications tailored to meet specific business needs. Enterprise Resource Planning (ERP) systems, for example, integrate various business processes like finance, human resources, and supply chain management into a unified platform. Customer Relationship Management (CRM) software helps organizations manage and analyze customer interactions and data throughout the customer lifecycle.
How businesses benefit from the use of mainframes
Curious about who relies on mainframe technology? You may not realize it, but mainframes are part of our everyday lives. Have you ever used an ATM to manage your bank account? If so, you've interacted with a mainframe computer.
Mainframes hold a significant role in the modern business landscape, particularly within the world's largest corporations. While various forms of computing are integral to business operations, mainframes remain essential in today's e-business environment. They are the backbone of many critical sectors, such as banking, finance, healthcare, insurance, utilities, government, and countless other public and private enterprises. Let's dive deeper into the world of mainframes and discover their diverse applications.
Read the chapter "Who uses mainframes and why do they do it?" below and answer the following questions:
Who are the primary users and beneficiaries of mainframe technology in modern business?
As technologies evolve, businesses are looking for ways to modernize their mainframes to make the most of their impressive capabilities. But how are organizations approaching this transformation? What benefits do they wish to gain, and what challenges and risks do they face?
To answer these questions, Kyndryl commissioned Coleman Parkes Research to survey 500 enterprises that rely on mainframes. The survey results showed:
99% of businesses are taking a hybrid approach to mainframe modernization
Businesses are moving 37% of their application portfolio off the mainframe.
Enterprises reported a 9-11% increase in profits after mainframe modernization.
Discover the essentials of enterprise computing and learn why mainframes are crucial to the success of global businesses. This brief introduction, presented by Lucan Sahn from IBM, highlights the reliability, security, and scalability of mainframes, showcasing their role as the cornerstone of industry leaders:
Overall, enterprise computing is a holistic approach to managing and optimizing the information technology resources of a large organization. It aims to enhance efficiency, productivity, and collaboration while ensuring the security and integrity of the organization's digital infrastructure.

What fundamental strengths and features make mainframes essential in contemporary information processing?


To report any violations or concerns, contact conduct@openmainframeproject.org.
A breakdown of how mainframes work
IBM mainframes are not just computers; they're technological marvels. From running operating systems like Linux® and IBM z/OS® to orchestrating massive simultaneous transactions and ensuring top-tier security, these machines are at the forefront of innovation.
Join us as we delve into the intricacies of mainframe engineering—uncovering their capacity on demand, shared memory dynamics, and the impressive execution of secure web transactions. We'll unravel the layers of redundancy that make them resilient in the face of extreme conditions.
Watch the video "What is a Mainframe" by Dr. Philipp Brune, and answer the following questions:
What are the key differences between the mainframe paradigm and the cloud or grid paradigm regarding architecture and scalability?
How does the shared memory architecture of mainframes contribute to their suitability for applications requiring high transaction security and efficient information sharing among parallel transactions?
In what ways does the hardware architecture of mainframes, originating from the IBM S/360 generation, demonstrate a unique aspect through its full backward compatibility despite the evolution in appearance?
As you can see, IBM mainframes are uniquely engineered to:
Run common operating systems like Linux, specialized operating systems such as IBM z/OS, and software that takes advantage of unique hardware capabilities.
Support massive simultaneous transactions and throughput (I/O) with built-in capacity on demand and built-in shared memory for direct application communication.
Deliver the highest levels of security with built-in cryptographic cards and innovative software. For instance, the latest mainframes can execute up to 1 trillion secure web transactions per day and manage privacy by policy.
Learn about mainframe architecture and take a virtual tour inside a mainframe machine.
Beyond the imposing presence of the big black box lies the intricate and sophisticated physical architecture of this technological marvel. But it's not just about the hardware; it's about unveiling the next generation of the world’s most powerful transaction system—IBM Z.
IBM Z introduces a groundbreaking encryption engine capable of executing more than 12 billion encrypted transactions per day. This engine is a game-changer, enabling pervasive encryption of data associated with any application, cloud service, or database all the time. Join us on this rapid journey behind the scenes as we explore the magic that transforms these physical components into the powerhouse of IBM Z. Let's witness the assembly of innovation!
Previously, the term mainframe was synonymous with the hardware (S/360, S/370, S/390 chips) and software (MVS, VM, VSE). However, nowadays, the term mainframe has a more complex meaning. The term mainframe must be divided into two meanings: (1) the hardware (Z-Series chip) and (2) the operating systems supported by the Z-Series chip (z/VM, z/OS, z/VSE, Linux).
Mainframe Hardware: Z-Series CPU chips: Supports z/VM, z/OS, z/VSE, Linux operating systems, and open-source software. Also supports Hybrid Cloud technology. Mainframe hardware includes outboard I/O and encryption CPUs, hyperchannels, redundant power supplies, and resilience.
Mainframe Software: Z-Series software is traditionally called VM, z/VM, MVS, MVS/XA, MVS/ESA, z/OS, VSE, or z/VSE. These operating systems have evolved and been in use over the last 50 years. Mainframe software has evolved from a purely batch-oriented system using only JCL to incorporating real-time interfaces and UNIX System Services. Mainframe software has also progressed to being compatible with distributed systems.
Embark on a fascinating virtual journey as we extend an exclusive invitation to enter the intricate world of mainframe hardware. In the immersive virtual experience you will experience below, we bring the mainframe to life, allowing you to understand its contemporary physical architecture intimately. Join us on this captivating virtual tour and witness the inner workings of the modern mainframe.
Here is the navigation instruction for your tour:
From the link above, choose the “Run Online/Web” option and choose “System/Server”.
Pick IBM Z for the latest mainframe. Then click on the latest machine type, which is at the top of the list. Once you are in the machine view, find “Explore Product Animation,” where you can navigate through all the machine details.
In the intricate tapestry of modern computing, the concept of Hybrid Cloud stands out as a pivotal thread weaving together the resilience of mainframe systems with the dynamic possibilities of cloud computing. As we embark on this exploration, we unveil the synergy that arises when traditional meets contemporary, presenting a comprehensive understanding of how Hybrid Cloud architectures revolutionize the landscape of mainframe computing, empowering enterprises to navigate the digital realm with unparalleled efficiency and innovation.
Use the following questions as guidance, and read the blog "Mainframe is a Part of Your Cloud Strategy. Now What?" by Matt Hogstrom.
How can businesses leverage Mainframe data to gain a competitive advantage in the market?
How does a Hybrid Cloud architecture enhance the capabilities of both the Mainframe and Cloud, and how does it differ from a Cloud-only strategy in modernizing infrastructure?
What are the three proven approaches outlined in the blog for successfully integrating Mainframe workloads into a Hybrid Cloud, and how do they contribute to the overall value of the IT investment?
Use the following questions as guidance, and read the blog "The Future of Hybrid Cloud" by Matt Hogstrom.
How does the Hybrid Cloud model synergize the strengths of mainframes with the agility and scalability of cloud services, and why is this integration crucial for achieving organizational success in today's competitive landscape?
In what ways does the concept of an "Open-First" approach, leveraging open APIs, command line interfaces, and modern open-source technologies, simplify the integration and extension of mainframes within a hybrid cloud environment? How does it impact the development experience and flexibility for both mainframe and cloud programmers?
As highlighted in the brief history and timeline section, the mainframe has a legacy dating back to the 1950s. While its long presence might give the impression of antiquity, it remains a dynamic and evolving field. Some may be familiar with mainframes, associating them with history, while others might be encountering the term for the first time. Despite its age, the mainframe continues to be a thriving domain, from its early contributions to NASA studies to its contemporary role in facilitating a significant portion of our daily transactions.
Compare mainframe and server.
Before we plunge into the technical intricacies of mainframes, let's kick off this tech journey with something we're likely more acquainted with: servers.
Below, you'll find a concise yet comprehensive infographic that puts servers head-to-head with mainframes from the perspective of the end users. Take a moment to dive into the details; once you've absorbed the visual feast, there are some additional readings you may find interesting right below the link, followed by three reflective learning questions.
Additional readings:
① Investing in the mainframe as part of modernizing in place is much more cost-effective than complete modernization outside the mainframe. Read More
② Moving mainframe workloads to the cloud could cost 5.21x more in total cost of ownership. Read More
③ For top-performing organizations, there is a direct and positive correlation between their mainframe investment and their overall performance relative to others. Read More
What specific features make mainframes more suitable for applications demanding the highest levels of security, and how do these features differ from those found in servers?
Analyze the factors that contribute to the exceptional uptime of mainframes, such as the z/OS operating system, and compare it to the challenges servers face in achieving similar levels of continuous operation. How does the difference in downtime affect the overall performance and maintenance capabilities of each system?
Explore the trade-offs between the initial investment in mainframes and the potential long-term savings in TCO. In what scenarios would the higher upfront cost of mainframes be justified, and how does the TCO calculation differ for distributed server environments?
Industry educator Marc Smith narrates this presentation about understanding the mainframe, its importance to the industry, who is using it and how, and what career opportunities are available.
Learn how and why mainframe is used by businesses.
You might not know the mainframe, but it is part of your daily routine. Chances are you are not a direct user of mainframes; still, you are an end user of most of the financial, enterprise, retail, and data management transactions taking place throughout the day. Given how competitive markets are and the wide variety of services and providers available to clients, a single outage or denied transaction might be the trigger for a client to switch to a competitor for good.
Let's imagine a practical example from an end user's perspective: as a consumer, you probably hold more than one credit card, yet you have a favorite that you use on a regular basis. If your main card fails to complete a transaction at an important moment, you tend to offer one of your other cards. If the new transaction goes through smoothly, you are satisfied, and chances are high that you will keep using the card that delivered a smooth purchase experience. Now imagine the other side of this story: would you like to be the company whose transaction failed? For a credit card company, this is the nightmare scenario.
All types and sizes of businesses and organizations can leverage the mainframe as the foundation technology to support their business, especially for their mission-critical applications. The most common are:
Big organizations that manage enterprise-wide mission-critical ERP operations
Banks that handle trillions of transactions
Backend operations of many digital applications
We invite you to watch the video below to understand why enterprises use the mainframe, the key differences between the mainframe and other platforms, and the different roles and personas in the mainframe world. It shows you the enduring significance of the mainframe in technology's evolving landscape, showcasing its resilience and pivotal role in today's business strategies. With a staggering 76 percent acknowledgment from business leaders and projections indicating a robust 63 percent growth in MIPS, the mainframe remains a foundational element in IT.
Amid the dynamic digital economy, mainframes support the exponential growth of mobile transactions, playing a pivotal role in diverse industries' success. Digital disruptors leveraging mainframes generate 2.5 times more profit, solidifying the mainframe as a cornerstone of innovation and efficiency in the digital age.
Watch the video below and use the following questions for your guidance:
How do recent survey results reflect the current perception of mainframes among business leaders, and what factors contribute to the sustained importance of mainframes in the evolving IT landscape?
In what ways does the IBM Z13 contribute to the enduring success of mainframes, and how do global studies, particularly those with over 324,000 IT customers, emphasize the efficiency and economic advantages of mainframes in large-scale business computing?
Explore the studies conducted by Dr. Howard Rubin regarding mainframes, focusing on their environmental impact, cost-effectiveness, and comparative advantages over distributed servers. How do these findings challenge common misconceptions and contribute to the ongoing reliance on mainframes in various industries?
Mainframes offer resiliency through multiple layers of redundancy for every component (power supplies, cooling, backup batteries, CPUs, I/O components, cryptography modules) and are tested against extreme weather conditions. (Source: IBM)
The mainframe has evolved from the S/360 to the z16 and has always remained relevant. Today's mainframe supports all current technologies and is among the most sophisticated platforms on the planet.
As the brief history and timeline section highlighted, the mainframe's legacy dates back to the 1950s, yet it remains a dynamic and evolving field.
Some of the key trends shaping the future of mainframes:
Virtualization: As mainframes become more virtualized, they can be shared by various users and applications.
Cloud computing: Cloud-based services like online banking and e-commerce are supported by mainframes.
Big Data: Processing and analyzing massive amounts of data is a good fit for mainframes.
Security: Mainframes are known for their security features, which makes them ideal for protecting sensitive data.
Despite its age, the mainframe continues to be a thriving domain. Its evolution showcases a remarkable journey of innovation, adaptability, and longevity.
Understanding the EXEC Statement
The EXEC statement serves as the starting point for each individual step within a job or procedure. Its primary role is to specify the program or procedure (cataloged or in-stream) that the step will execute. Additionally, it provides instructions to the system on how to handle the execution of that step. A single job can include up to 255 steps.
Syntax
//STEPNAME EXEC PGM=program-name
STEPNAME: A unique label for the step.
PGM: The name of the program or utility to execute.
Example
//STEP1 EXEC PGM=SORT
Here, the SORT utility is executed in the first step of the job.
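Besides naming a program with PGM=, the EXEC statement can also invoke a cataloged or in-stream procedure, and it can pass a parameter string to the program through PARM. A short sketch, where MYPGM and MYPROC are hypothetical names:

```jcl
//* Run a program, passing it the parameter string 'DEBUG'
//STEP1   EXEC PGM=MYPGM,PARM='DEBUG'
//* Invoke a cataloged procedure; the PROC= keyword is optional
//STEP2   EXEC PROC=MYPROC
//STEP3   EXEC MYPROC
```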
If you're new to mainframe computing, Job Control Language (JCL) is a key skill you'll need. Think of it as the set of instructions you give the mainframe to execute specific tasks, like running programs or managing datasets. It doesn’t perform the calculations or logic itself but tells the system what to do, step by step. Whether you’re creating files, processing payroll data, or managing daily logs, JCL acts as the blueprint for the system to follow.
What is JCL and Why Is It Important?
JCL is short for Job Control Language. It’s the language used to communicate with IBM mainframe systems. Unlike general-purpose programming languages, JCL is specific to defining jobs: a sequence of steps where each step performs a distinct function.
For example:
Step 1: Read input data from a file.
Step 2: Run a program to process the data.
Step 3: Write the output to a new file.
Without JCL, mainframes wouldn’t know what tasks to perform, which resources to allocate, or where to store the results.
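The three steps above can be sketched as a single job. IEBGENER is a real IBM copy utility; the dataset names and the processing program PROCPGM are hypothetical:

```jcl
//MYJOB    JOB (12345),'THREE STEP DEMO',CLASS=A,MSGCLASS=X
//* Step 1: read the input data by copying it into a temporary work file
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=MY.INPUT.DATA,DISP=SHR
//SYSUT2   DD DSN=&&WORK,DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(TRK,(1,1)),DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//* Step 2: run a program that processes the work file
//STEP2    EXEC PGM=PROCPGM
//INFILE   DD DSN=&&WORK,DISP=(OLD,PASS)
//OUTFILE  DD DSN=&&RESULT,DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(TRK,(1,1)),DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//* Step 3: write the processed output to a new cataloged dataset
//STEP3    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=&&RESULT,DISP=(OLD,DELETE)
//SYSUT2   DD DSN=MY.OUTPUT.DATA,DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(TRK,(1,1)),DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
```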
Breaking Down a JCL Job
Every JCL script is structured into three main sections:
JOB Statement: Introduces the job to the system and specifies its overall parameters.
EXEC Statement: Defines the program or procedure to run in each step.
DD (Data Definition) Statements: Specify the datasets and system resources needed for each step.
Example:
//MYJOB JOB (12345,67890),'Demo Job',CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEFBR14
//MYDATA DD DSN=MY.TEST.FILE,DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(5,5)),UNIT=SYSDA,LRECL=80,RECFM=FB
What’s Happening Here?
The JOB Statement introduces the job and sets global parameters, such as:
CLASS=A: Assigns the job a priority.
MSGCLASS=X: Specifies where logs are sent.
NOTIFY=&SYSUID: Notifies the job submitter upon completion.
The EXEC Statement specifies that the IEFBR14 utility is being executed. This utility doesn’t do much other than allocate or delete datasets, but it’s commonly used for testing.
The DD Statement defines a new dataset called MY.TEST.FILE.
DISP=(NEW,CATLG,DELETE): Creates the dataset, catalogs it on success, and deletes it if the job fails.
SPACE=(CYL,(5,5)): Allocates space for the dataset in cylinders.
LRECL=80 and RECFM=FB: Define the record length and format.
Zowe, the integrated and extensible open source framework for z/OS, combines the past and present to build the future of mainframes. Like Mac OS, Windows, and others, Zowe comes with a core set of applications out of the box in combination with the APIs and OS capabilities future applications will depend on.
Zowe offers modern interfaces to interact with z/OS similar to what you may experience on cloud platforms today. You can use these interfaces as delivered or through plug-ins and extensions that are created by clients or third-party vendors.
Learn about Zowe architecture, components, and how to quickly get started with Zowe. Read about what's new and changed in the Release Notes, and FAQs along with thorough documentation to:
Setup and use
Extend
Troubleshoot
Contribute
Discussion about what mainframe modernization means and what YOU can contribute
Mainframe modernization is a critical topic that has evolved alongside the development of mainframe applications and systems. But what does it truly mean to modernize a mainframe? It's a question that often eludes a clear, comprehensive definition. In this subchapter, we've gathered objective and insightful discussions from the Mainframe Open Education community, distilling the key points to help you grasp this essential concept. Whether you're new to mainframes or looking to deepen your understanding, this overview aims to provide a clear and thorough exploration of what modernization means in the context of mainframe technology.
In this article by IBM, you will get a basic understanding of what mainframe modernization is and why modernization is such an essential topic for mainframe:
In the article 'So What Does ‘Mainframe Modernization’ Really Mean?' Allen Zander outlines seven key facets of mainframe modernization, offering a valuable framework for understanding this complex topic. These facets include cost modernization, professional skills, interface modernization, progressive modernization, code modernization, performance modernization, and transparency modernization.
We will adopt this framework for the remaining part of this subchapter to aggregate the discussions and content.
Watch this video to see why young people and the mainframes mix well.
Here is an article introducing how mainframe app development is changing, from GenAI to DevSecOps:
Here is a high-level overview of tools and technologies regarding their return on investment and level of effort that can be leveraged into the modernization strategy:
To truly understand what modernization is, it is essential to emphasize the non-migration nature of mainframe modernization, as mentioned by several mainframe technical and business leaders:
This article provides step-by-step guidance at a high level about mainframe modernization strategies:
Here is a series of executive conversations led by Broadcom and Meet the Boss, focusing on business transformation and modernization:
This article provides an example of how organizations can gain a deep understanding of legacy systems, enabling informed decision-making during their modernization project:
AI could also play an essential role in mainframe modernization:
It's undeniable that today's mainframe needs to be modernized to meet the evolving demands of the future. With countless approaches available, it's important not to let others dictate what mainframe modernization should mean for you. Instead, discover the approach that best fits your needs and actively contribute to the ongoing evolution of the mainframe. Remember, achieving success in this field takes a collective effort, and your insights and contributions can make a significant difference!
Some questions to frame the discussion: Who drives the modernization conversation within an organization? Modernization spans tools, methodologies, and organizational change, so whatever you are doing on Z is part of it.
The efforts going into open source can be viewed as part of the transparency conversation as well.
An effort that merely tries to make the mainframe look like every other platform may not be the right approach.
A Planet Mainframe conversation with Microsoft about the mainframe's critical role captures the shift in industry sentiment: five years ago the answer was "everything is cloud"; now it is "cloud, maybe":
Learn about the mainframe history.
The evolution of human computation, from ancient tally sticks to the modern marvels of technology, traces a fascinating journey. The series of videos below delves into the rich tapestry of computing history, from Stonehenge and abacuses to the revolutionary era of mainframes. Alan Turing's groundbreaking algorithms with the concept of a stored program computer and the commercial dawn of computers in the mid-20th century set the stage for a transformative period in business computing.
Amid the cultural upheavals of the 1960s, IBM's 360 emerged as the harbinger of modern mainframes, reshaping business computing and empowering endeavors like NASA's space missions.
The mainframe's impact on real-time transaction processing in the 1970s laid the foundation for critical systems like credit card authorizations and airline reservations.
The mainframe was the outgrowth of IBM's cultural commitment to responsible computing and an architecture built around business needs. IBM's promise, made in 1964, that whatever you wrote for their mainframe would continue to run into the future has been faithfully kept. The ability to run legacy workloads without constant upgrades is a testament to the mainframe's enduring legacy. The 1980s and 1990s were a time of growth, expansion, and response to the challenges posed by distributed computing. The mainframe persisted, adapting to change and providing reliable, secure, and scalable computing.
As the 21st century began, the internet and global commerce expanded exponentially, emphasizing the need for security, reliability, and manageability—the mainframe's forte. Globalization and the push for cloud computing posed challenges, but the mainframe's attributes of security, scalability, and reliability remained unmatched. The introduction of Linux to the mainframe platform showcased its adaptability to new technologies. The mainframe's role in business computing prevailed as organizations realized its unique capabilities in handling massive amounts of data and transactions.
The mainframe community anticipates continued growth and relevance in the coming years, with a new generation of professionals entering the field through educational initiatives. The mainframe's ability to provide a secure, scalable, and cost-effective platform positions it as a cornerstone of the IT ecosystem. The mainframe's future appears bright, with its offering unlimited opportunities for those interested in business computing. IBM's ongoing investment in the platform ensures that the mainframe will continue to be a vibrant and integral part of the ever-evolving IT landscape.
Here is the full version of the Mainframe story:
Learn more about the history of mainframes through the virtual mainframe exhibition at the Computer History Museum below.
Understanding the DD Statement
The DD (Data Definition) Statement defines the datasets used in the job. It links the program to the resources it needs, such as input files or temporary storage.
Syntax
//DDNAME DD DSN=data-set-name,DISP=disposition[,parameters]
Parameters
DDNAME: The name used within the program to reference the dataset.
DSN: The name of the dataset.
DISP: Specifies how the dataset is handled.
SHR: Opens an existing dataset for shared access.
NEW: Creates a new dataset.
OLD: Indicates that the dataset already exists and gives the job step exclusive access to it; no other job can use the dataset until the current job step is complete.
DCB (Data Control Block):
LRECL: Record length of the dataset.
RECFM: Record format of the dataset, such as FB (Fixed Blocked) or VB (Variable Blocked).
BLKSIZE: Block size of the dataset.
Example
//MYFILE DD DSN=MY.DATASET.NAME,
// DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(10,5),RLSE),
// UNIT=SYSDA,
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=800,DSORG=PO)
Explanation:
DSN=MY.DATASET.NAME: Specifies the name of the dataset.
DISP=(NEW, CATLG,DELETE): Creates a new dataset, catalogs it on success, and deletes it on failure.
SPACE=(CYL,(10,5),RLSE): Allocates 10 cylinders initially, with 5 more as needed, and releases unused space.
UNIT=SYSDA: Uses the default system storage device.
Creating a Physical Sequential Dataset (PS)
//CREATEPS JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEFBR14
//MYPS DD DSN=MY.PS.FILE,
// DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(1,1),RLSE),
// UNIT=SYSDA,
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=800,DSORG=PS)
Explanation:
JOB Statement:
Introduces the job as CREATEPS.
CLASS=A specifies the priority.
MSGCLASS=X sends logs to a specific output class.
NOTIFY=&SYSUID notifies the job submitter upon completion.
EXEC Statement:
Runs the IEFBR14 utility, which is a "dummy" program often used to allocate or delete datasets.
DD Statement:
DSN=MY.PS.FILE: Names the dataset.
DISP=(NEW,CATLG,DELETE): Creates and catalogs the dataset if successful; deletes it if the job fails.
SPACE=(CYL,(1,1),RLSE): Allocates 1 cylinder initially, with 1 more as needed, and releases unused space.
UNIT=SYSDA: Assigns the default system storage device.
DCB=(RECFM=FB,LRECL=80,BLKSIZE=800,DSORG=PS): Specifies fixed-length records (80 bytes per record and 800 bytes per block).
DSORG=PS: Indicates that the dataset is a Physical Sequential (PS) dataset. In the earlier example, DSORG=PO instead specified a partitioned dataset (PDS).
For reference, here are other common DISP values and DD parameters:
MOD: The dataset already exists; new records are appended to it without overwriting the existing records.
CATLG: The dataset is retained with an entry in the system catalog.
UNCATLG: The dataset is retained, but its entry is removed from the catalog.
DELETE: The dataset is deleted and its catalog entry removed.
PASS: Used on normal step termination when the dataset is to be passed to and processed by a later step in the same job.
DSORG: Dataset organization (PS for physical sequential, PO for partitioned).
SPACE: Specifies the space required for the dataset; RLSE releases unused allocated space.
UNIT: Specifies what type of storage to use for the dataset, such as SYSDA, a generic name for DASD (Direct Access Storage Device).
SYSOUT: Directs output to a specific location, such as the spool or a printer.
SYSOUT=*: Sends output to the default output class (the job's MSGCLASS).
SYSOUT=X: Sends output to output class X.
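The dispositions described above can be seen together in a short two-step sketch. IEFBR14 and IEBGENER are real IBM utilities; the dataset names are hypothetical:

```jcl
//PASSDEMO JOB (12345),CLASS=A,MSGCLASS=X
//* Step 1: allocate a temporary dataset and PASS it to a later step
//STEP1    EXEC PGM=IEFBR14
//TEMPDD   DD DSN=&&TEMP,DISP=(NEW,PASS),UNIT=SYSDA,
//            SPACE=(TRK,(1,1)),DCB=(RECFM=FB,LRECL=80,BLKSIZE=800)
//* Step 2: receive the passed dataset, then append a copy of its
//* contents to an existing log dataset with DISP=MOD
//STEP2    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=&&TEMP,DISP=(OLD,DELETE)
//SYSUT2   DD DSN=MY.DAILY.LOG,DISP=MOD
```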
Understanding the JOB Statement
The JOB Statement is always the first line in a JCL script. It provides high-level details about the job, such as its name, priority, and notification.
Syntax
//JOBNAME JOB (accounting-info),CLASS=x,MSGCLASS=x,NOTIFY=&SYSUID
Parameters Explained
JOBNAME: The unique name of the job.
Example: PAYROLL1.
Accounting Info: Used for tracking resource usage.
Example: (12345,67890).
CLASS=x: Determines the priority and resource allocation.
Example: CLASS=A.
MSGCLASS=x: Specifies where system logs are sent.
Example: MSGCLASS=X.
NOTIFY=&SYSUID: Sends a message to the job submitter upon completion.
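Putting these parameters together, a complete JOB statement using the example values above might look like this (the job name and accounting information are illustrative):

```jcl
//PAYROLL1 JOB (12345,67890),'MONTHLY PAYROLL',CLASS=A,MSGCLASS=X,
//             NOTIFY=&SYSUID
```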
Understanding Libraries in JCL
In JCL, libraries refer to storage locations where datasets, programs, or procedures are stored. They are critical in mainframe environments, as they allow organized access to frequently used resources like executable programs, reusable procedures, or dataset definitions. Libraries ensure efficiency, reusability, and proper organization of system resources.
To successfully run the program that you specify on a JCL EXEC statement, z/OS has to search for and find that program. Using JOBLIB or STEPLIB statements can reduce search time.
When you code the PGM parameter, z/OS automatically searches standard system program libraries, such as SYS1.LINKLIB, which contains IBM-supplied programs.
If the program you want to run resides in a private program library, you must specify either a JOBLIB statement or a STEPLIB statement for z/OS to successfully locate the program.
JOBLIB (Program Library for All Steps in a Job)
Specifies the library where the system should search for programs used in all steps of the job.
JOBLIB applies to every step in the job. This is useful when multiple steps use programs stored in the same library.
Example
//MYJOB JOB (123),CLASS=A,MSGCLASS=X
//JOBLIB DD DSN=MY.LOAD.LIB,DISP=SHR
//STEP1 EXEC PGM=PROGRAM1
//STEP2 EXEC PGM=PROGRAM2
Explanation:
JOBLIB: Directs the system to MY.LOAD.LIB to locate the programs PROGRAM1 and PROGRAM2 for both steps.
STEPLIB (Program Library for a Specific Step)
Specifies the library to search for programs for a single step within a job.
Overrides JOBLIB and applies only to the step where it is defined.
Example:
//STEP1 EXEC PGM=PROGRAM1
//STEPLIB DD DSN=MY.LOAD.LIB,DISP=SHR
Explanation:
STEPLIB: Directs the system to search for PROGRAM1 in MY.LOAD.LIB.
JCLLIB (Procedure Library for Cataloged Procedures)
The JCLLIB statement specifies the location of cataloged procedures.
If the cataloged procedure resides in a specific dataset library, the JCLLIB statement ensures that the system searches that location to find the procedure.
Example:
//MYJOB JOB (123),'Example Job',CLASS=A,MSGCLASS=X
//JCLLIB ORDER=MY.PROC.LIB
//STEP1 EXEC PROC=MYPROC
Explanation:
JCLLIB ORDER=MY.PROC.LIB: Tells the system to look in MY.PROC.LIB for the cataloged procedure MYPROC.
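For reference, a cataloged procedure such as the hypothetical MYPROC above is itself just JCL, stored as a member of the procedure library (here MY.PROC.LIB). A minimal sketch:

```jcl
//MYPROC  PROC
//PSTEP1  EXEC PGM=IEFBR14
//WORKDD  DD DSN=MY.PROC.WORK,DISP=SHR
```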
Conditional statements in JCL let your job “decide” which steps to run, based on what happened in previous steps. It’s like telling your JCL: "If the last step failed, do this; otherwise, do that."
This is useful for handling errors or making your job more flexible, without having to run everything every time.
Example:
//* Compile the program
//STEP1   EXEC PGM=IGYCRCTL
//* Run tests
//STEP2   EXEC PGM=TESTPROG
//        IF (STEP1.RC = 0 & STEP2.RC = 0) THEN
//* Copy output
//STEP3   EXEC PGM=COPYOUT
//        ELSE
//* Print error report if either step failed
//STEP4   EXEC PGM=PRTERRPT
//        ENDIF
Explanation:
STEP1: Try to compile program.
STEP2: Run tests.
IF: If both STEP1 and STEP2 ended with a return code (RC) of zero, then do STEP3.
ELSE: If not, then do STEP4.
ENDIF: End the conditional block.
When is this used?
Error Handling: Skip steps if a previous step failed.
Branching Logic: Decide what to do next based on the previous result.
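The same effect can also be achieved with the older COND parameter on the EXEC statement, which bypasses a step when any of its tests is true. A sketch equivalent to running STEP3 only when STEP1 and STEP2 both returned 0:

```jcl
//* Bypass STEP3 if 0 is not equal to STEP1's or STEP2's return code,
//* i.e. run STEP3 only when both return codes are 0
//STEP3   EXEC PGM=COPYOUT,COND=((0,NE,STEP1),(0,NE,STEP2))
```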
Understand how to secure data and facilitate compliance in the mainframe world.
Many uncritically believe that the mainframe z/OS system is inherently secure without additional attention or effort. In reality, it's more accurate to say that the mainframe can be the most secure platform, but only when it is appropriately managed.
Watch this video below that introduces three common myths about the mainframe, using the following guiding questions to assist your comprehension:
For the security of mainframe, why is it emphasized that despite the mainframe's secure foundation, applying and monitoring controls are crucial for maintaining genuine data protection?
In terms of mainframe data accessibility, in what ways has the role of the mainframe evolved in the modern data center, challenging the notion that mainframe data stays isolated?
Regarding the assumption about mainframe security, how do the mainframe's changing role and increased integration capabilities challenge the assumption of complete security, leading to potential organizational complacency and increased risk?
The process of connecting to the mainframe
Interacting with IBM z/OS using the TN3270 terminal remains a staple for mainframe professionals, despite the availability of modern methods like code editors. Typically, you begin by opening the TN3270 client to connect to TSO/E, the interface that enables multiple users to interact with the mainframe simultaneously. After logging in with your credentials, you access ISPF (Interactive System Productivity Facility), a menu-driven interface that allows you to navigate and manage various components of z/OS. Through ISPF panels, you can edit code, submit jobs, manage datasets, and monitor system activities. Upon the completion of your tasks, you can log off from TSO/E to end your session and disconnect from the mainframe. While newer tools offer more user-friendly features, the TN3270 terminal is a reliable and essential method for direct access to system resources.
3270 and TN3270 are key interfaces for connecting to z/OS. TN3270, or Telnet 3270, is emulator software that allows workstations to connect and log into z/OS systems. Understanding TN3270 is essential for z/OS users, as it emulates the functionality of the original 3270 display devices introduced in 1971. The 3270 devices used a character data stream protocol, which was designed for efficient data entry and retrieval, compensating for the limited network capabilities of the era. This protocol's efficiency continues to be valuable today, especially when fast interactive response times are needed on z/OS.
While TN3270 brings the 3270 interface to modern workstations, it does have some quirks, particularly around keyboard navigation. The original 3270 had a unique keyboard layout, and as keyboards evolved, differences such as the location of the 'Enter' key arose. In TN3270 emulators, the 'Enter' command (hex X'7D') is often mapped to the right 'Ctrl' key rather than the labeled 'Enter' key, but most emulators allow customization of these key mappings to fit user preferences.
TN3270 emulators offer various configuration options, including screen sizes, fonts, colors, and code pages, making them flexible tools for connecting to z/OS. They are used daily by support technicians, developers, and many back-office personnel for business transactions and production applications. The ability to customize login screens and cursor placements enhances user interaction with z/OS applications. TN3270’s speed, reliability, and customization options make it an indispensable tool for accessing z/OS environments, both now and in the future.
Watch the following video to learn 3270 and TN3270:
TSO is a command-line interface that’s an integral part of the z/OS operating system, offering a text-based way to interact with the system. While it may remind you of the DOS command prompt in Windows, TSO is far more powerful and is specifically tailored for mainframe operations.
Although TSO itself is text-based and does not support full-screen interactions, it’s commonly used alongside ISPF (Interactive System Productivity Facility), which enables full-screen applications for a more interactive user experience. TSO also plays a critical role behind the scenes, handling background tasks such as processing commands, scripts, and batch jobs.
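A few representative TSO/E commands, entered at the READY prompt, give a feel for the interface. ALLOCATE, LISTDS, and FREE are standard TSO/E commands; the dataset name is hypothetical, and the trailing + continues a command onto the next line:

```
ALLOCATE DATASET('USERID.TEST.DATA') NEW SPACE(1,1) TRACKS +
  RECFM(F,B) LRECL(80) BLKSIZE(800)
LISTDS 'USERID.TEST.DATA'
FREE DATASET('USERID.TEST.DATA')
```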
Watch the following video to learn TSO/E in detail:
You might also find this introduction, demonstrated in an actual 3270 session, helpful for matching the concepts to actual practice:
Interactive System Productivity Facility (ISPF) is a multi-faceted development tool set for IBM Z®. It provides host-based software development, including software configuration management. Watch this video to get a brief idea about ISPF:
This short video provides a more detailed introduction to ISPF:
Review Technology Training that is important to the path - z/OS Fundamentals, TSO / ISPF, JCL, Asset Management, etc.
Now that we know what a mainframe is, we have prepared, as a next step, a section for those of you looking for guidance about mainframe architecture and software. The objective of this section is to share initial information about mainframe components and foundational architecture concepts, and about how and where you should start.
Before proceeding further, let's understand more about the components and architecture of mainframe machines:
The part of the computer that contains the sequencing and processing logic for:
Instruction execution: Fetching instructions from memory, decoding them to understand the required action, and then executing the decoded instructions using the arithmetic and logic unit (ALU).
Initial program load: During IPL, the CP loads the operating system into main memory and starts the various system services needed for smooth operation of the mainframe.
Other machine operations.
Central storage is the primary storage located within the processor complex. It is directly accessible by the Central Processing Unit (CPU) for executing instructions and processing data.
Auxiliary storage is external to the processor and includes various types of non-volatile storage media.
Mainframes are typically divided into Logical Partitions (LPARs), which are subsets of the computer's hardware resources virtualized as separate computers, each with its own set of hardware resources such as CPU, memory, and I/O channels. A physical machine can be partitioned into multiple LPARs, each housing a separate operating system. For example, one LPAR might run z/OS while another runs z/VM.
Hardware Management Console (HMC): Controls the mainframe hardware.
Manages logical partitions (LPARs).
Controls system initialization, configuration, and operational tasks.
Operator console: Controls and operates z/OS operating systems.
Software that controls the running of programs; in addition, an operating system may also provide services such as resource allocation, scheduling, and data management.
z/OS: The primary operating system for IBM mainframes, designed for high availability, security, and scalability.
z/VM: A hypervisor (virtual machine monitor) that allows multiple virtual machines to run on a single mainframe where each VM can run its own operating system.
z/VSE: An operating system for small and medium-sized mainframe environments used for batch processing and transaction processing.
An IBM transaction processing facility for processing messages to/from web services.
Software that provides access control and auditing functionality for z/OS and z/VM operating systems.
Scripting language for defining and controlling batch jobs on z/OS. In mainframe systems, "batch" refers to a method of processing a set of tasks or programs without requiring user interaction in real time.
An operating system facility to enable users to share computer time and resources.
ISPF (Interactive System Productivity Facility):
Provides a better interface for managing z/OS resources, dataset management, and job submission. It offers a more user-friendly interface compared to the command-line interface of TSO.
That was an overview of the components and technology used in mainframe systems; next we will dive deeper into them.
An introductory overview of more than 30 key terms used in mainframe management.
Ready to embark on the journey of becoming a mainframer? Fantastic! To kickstart your exploration, let's begin by unraveling the language of the mainframe world.
In the video below, titled "Talk Like a Mainframer," we'll guide you through 33 popular terms that seasoned mainframers use in their daily discourse. Whether you're a budding enthusiast or a seasoned pro, understanding this unique lexicon is key to navigating the intricate landscape of mainframe technology.
For your reference, here are the terms mentioned in the video above:
z/OS: A widely used operating system for IBM mainframe computers that uses 64-bit central storage. A descendant of the IBM 360, 370 XA, and 390 ESA MVS operating systems, which used 24-bit and 31-bit addressing.
Abend: Term for abnormal ends associated with a job or task. Not a crisis; return codes or reason codes aid troubleshooting.
Syslog (System Log): Contains dumps and messages on consoles.
Job Log: Provides information before and after an abnormal end, aiding analysis.
ACK: Acknowledgment field. A shorthand confirmation commonly used in developer conversations.
APARs (Authorized Program Analysis Reports): Identify bugs and progress through temporary fixes to the final Program Temporary Fix (PTF).
Batch Jobs: Automated tasks, often scheduled during low-usage times for activities like report generation.
BCP (Base Control Program): Core of z/OS, residing in the central electronic complex, aka the "Box" or "Keck."
CICS (Customer Information Control System). Middleware subsystem optimizing high transaction volumes.
CEC (Central Electronic Complex): Houses core mainframe hardware components, often referred to as the "Keck" or "Box."
Sysprogs: Systems programmers. They manage the mainframe, handling tasks like installing products, dealing with JCL, and ensuring smooth operations.
Concurrent Upgrades: Allow dynamic updates without rebooting.
Coupling Facility: Ensures data consistency across parallel systems.
IMS (Information Management System): Legacy subsystem.
ESM (External Security Manager): Handles security for products, including ACF2, Top Secret, and RACF.
Fiber Channel: Communication channel.
OSA (Open Systems Adapter): a Network Interface Card (NIC)
GDPS (Geographically Dispersed Parallel Sysplex): High-end solution for disaster recovery.
Parallel Sysplex: Facilitates parallel systems, often referred to as a "Plex."
HMC (Hardware Management Console): Desktop gateway to the mainframe.
ISPF (Interactive System Productivity Facility): Provides a terminal interface for system management.
JCL (Job Control Language): Simplifies job creation.
LPARs (Logical Partitions): Allow logical segmentation of a single mainframe into multiple systems.
LIC (Licensed Internal Code): Mainframe firmware.
MSUs (Millions of Service Units): Measure processing capacity.
MIPS (Million Instructions Per Second): Measure processing capacity.
PAX Files: Essential for installing products.
SMP/E (System Modification Program/Extended): Controls changes to the operating system.
SMF Records: Record types, particularly "Type 89" for SCRT (Sub-Capacity Reporting Tool).
SCRT (Sub-Capacity Reporting Tool): Crucial for usage reporting.
Type 89 Record: Part of SMF Records, specifically for SCRT.
z/VM: z Virtual Machine used to run other z/VMs or operating systems.
SAF (System Authorization Facility): Standard universal security API in z/Systems to enable security calls to the ESM access control products.
RACROUTE: Standard security program instruction used to invoke the SAF API security calls to the ESM(s).
Here is a brief introduction to what z/OS is:
Dr. Philipp Brune gives a detailed explanation of what z/OS is and how it works:
In this video, you will learn about the hardware in a typical z/OS Data Center.
To get a comprehensive understanding on z/OS, we strongly recommend you take the IBM Redbooks z/OS Introduction course:
What Are Instream Procedures?
An in-stream procedure is written directly within the same JCL member where it is executed. It starts with a PROC statement and ends with a PEND statement. These procedures consist only of a limited set of JCL statements.
An in-stream procedure may include: CNTL, comment, DD, ENDCNTL, EXEC, IF/THEN/ELSE/ENDIF, INCLUDE, and SET statements.
Written directly in the JCL between the PROC and PEND statements.
Must be defined before the step that executes the procedure.
Limited to the job in which it is defined
Example
//MYJOB JOB (12345),CLASS=A,MSGCLASS=X
//MYPROC PROC
//STEP1 EXEC PGM=IEFBR14
//STEP2 EXEC PGM=PROGRAM2
//         PEND
//STEP3 EXEC PROC=MYPROC
Explanation
MYPROC PROC: The in-stream procedure defined in the same JCL file.
PEND: Marks the end of the procedure.
EXEC PROC=MYPROC: Invokes the procedure in Step 3.
What Are Cataloged Procedures?
When a procedure is separated out from the JCL and coded in a different data set, it is called a cataloged procedure. A PROC statement is optional in a cataloged procedure.
Example
Suppose MYPROC is a cataloged procedure stored in a library. The JCL to invoke it is:
//MYJOB JOB (12345),CLASS=A,MSGCLASS=X
//         JCLLIB ORDER=MY.PROC.LIB
//STEP1 EXEC PROC=MYPROC
Procedure Definition (Stored in MY.PROC.LIB)
//MYPROC PROC
//STEP1 EXEC PGM=IEFBR14
//STEP2 EXEC PGM=PROGRAM2
Explanation
JCLLIB ORDER=MY.PROC.LIB: Points to the library where the cataloged procedure is stored.
EXEC PROC=MYPROC: Executes the predefined steps in the cataloged procedure MYPROC
What Are Symbolic Parameters?
Symbolic parameters allow us to pass information from JCL to procedures (PROCs), making PROCs reusable with different values when calling from different JCLs.
They allow us to pass variable information from JCLs to procedures (PROCs) without changing the actual procedures (PROCs) code. This makes it easier to manage jobs and procedures that need to run with different parameters in different situations.
Symbolic parameters are placeholders or variables used in PROCs. They receive variable data from JCL, which replaces symbolic variables in PROC. Symbolic parameters can be defined and used in the same JCL.
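As a minimal sketch of that last point, a symbolic parameter can be defined with a SET statement and used later in the same JCL (job and dataset names here are illustrative, not from the original):

```jcl
//MYJOB    JOB (12345),CLASS=A,MSGCLASS=X
//         SET FILENAME=MY.INPUT.DATA
//STEP1    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1   DD DSN=&FILENAME,DISP=SHR
//SYSUT2   DD SYSOUT=*
//SYSIN    DD DUMMY
```

At job conversion, &FILENAME in the SYSUT1 DD is replaced by MY.INPUT.DATA, so the same step can be pointed at a different input by changing only the SET statement.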
This procedure is stored in a library (e.g., MY.PROC.LIB) and uses symbolic parameters for flexibility.
//COPYPROC PROC FILENAME=DEFAULT.DATA
//STEP1 EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=&FILENAME,DISP=SHR
//SYSUT2 DD DSN=OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
// SPACE=(TRK,(5,5)),UNIT=SYSDA
JCL Invoking the Procedure
This job invokes the procedure COPYPROC and passes a specific value for the symbolic parameter FILENAME.
//MYJOB JOB (123),CLASS=A,MSGCLASS=X
//         JCLLIB ORDER=MY.PROC.LIB
//STEP1 EXEC PROC=COPYPROC,FILENAME=MY.INPUT.DATA
Explanation
In the Procedure:
&FILENAME is a symbolic parameter with the default value DEFAULT.DATA.
In the Job:
When invoking COPYPROC, the value MY.INPUT.DATA is passed for &FILENAME.
//SYSUT1 DD DSN=MY.INPUT.DATA,DISP=SHR
//SYSUT2 DD DSN=OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
// SPACE=(TRK,(5,5)),UNIT=SYSDA
IEBCOMPR
IEBCOMPR is used to compare PS, PDS, or PDSE datasets.
It is useful for verifying that backups were taken correctly.
PS datasets are considered equal if they contain the same number of records and the corresponding records are identical.
This replaces the placeholder &FILENAME in the procedure.
Final output: During execution, the statements shown above are generated.
Selective copy in IEBCOPY allows you to copy only specific members from a source PDS to a target PDS, instead of copying the entire dataset. This is useful when you need only a few members and want to avoid duplicating everything. You can achieve this by using SELECT MEMBER=xxx statements in the SYSIN section.
EXAMPLE:
Selective Member Copy from PDS
//IEBCOPY JOB (12345),'SELECTIVE COPY',CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=T12345.SOURCE.PDS,DISP=SHR
//SYSUT2 DD DSN=T12345.TARGET.PDS,DISP=SHR
//SYSIN DD *
COPY OUTDD=SYSUT2,
INDD=SYSUT1
SELECT MEMBER=(PROG1,PROG2)
/*
Explanation:
EXEC Statement: Executes the IEBCOPY utility to copy specific members.
DD Statement:
SYSUT1 DD: Specifies the input PDS (T12345.SOURCE.PDS) that contains the members to be copied.
SYSUT2 DD: Specifies the output PDS (T12345.TARGET.PDS) where the selected members will be copied.
SYSPRINT DD: Sends operation logs and messages to the system output (SYSOUT).
SYSIN DD: Contains the control statements:
COPY INDD=SYSUT1,OUTDD=SYSUT2: Tells IEBCOPY to perform the copy operation.
SELECT MEMBER=(PROG1,PROG2): Specifies which members to copy from the source PDS.
This program selectively copies only the PROG1 and PROG2 members from the source PDS to the target PDS, making the operation efficient when full dataset copying is not needed.
EXAMPLE:
Copying a Sequential Dataset into a PDS Member
//IEBGENER JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEBGENER
//SYSUT1 DD DSN=T12345.FILE1.PS,DISP=SHR
//SYSUT2 DD DSN=T12345.FILE2.PDS,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SYSIN DD *
GENERATE MAXNAME=1
MEMBER NAME=MEMBER1
Explanation:
EXEC Statement: Executes the IEBGENER utility to copy data.
DD Statements:
SYSUT1 DD: Specifies the input dataset T12345.FILE1.PS, which is a sequential dataset.
SYSUT2 DD: Specifies the output dataset as a partitioned dataset (PDS), T12345.FILE2.PDS, where a new member will be created.
SYSPRINT DD: Directs informational messages or logs to system output (SYSOUT=*).
SYSOUT DD: Handles general system output, directed to the default system output (SYSOUT=*).
SYSIN DD: Uses in-stream data to specify the generation of a new member in the PDS with the following parameters:
GENERATE MAXNAME=1: Specifies that one member will be generated.
MEMBER NAME=MEMBER1: Creates a member named (MEMBER1) in the partitioned dataset (T12345.FILE2.PDS).
This program uses the IEBGENER utility to copy data from a sequential dataset (T12345.FILE1.PS) into a newly generated member (MEMBER1) within the partitioned dataset (T12345.FILE2.PDS). The (GENERATE) keyword within SYSIN ensures that the member is created with the specified name.
PDS and PDSE datasets are considered equal if both contain the same number of members, the same number of records, and the corresponding records are identical.
The datasets being compared must have the same record length and format.
Return code 0 if the files are identical
Return code 8 if the files are not identical
EXAMPLE:
Comparing two sequential data sets:
Explanation:
EXEC Statement:
Executes the IEBCOMPR program, which compares two datasets.
DD Statements:
SYSUT1 DD:
DSN=T12345.FILE1.PS: Name of the first dataset to compare.
SYSUT2 DD:
DSN=T12345.FILE2.PS: Name of the second dataset to compare.
SYSIN DD:
COMPARE TYPORG=PS: Specifies the comparison type as PS (physical sequential dataset).
If COMPARE TYPORG=PO is specified, the comparison type is PO (partitioned datasets).
If no differences are found between SYSUT1 and SYSUT2, the return code is 0; if differences are found, the return code is 8.
//IEBCOMPR JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEBCOMPR
//SYSUT1 DD DSN=T12345.FILE1.PS,DISP=SHR
//SYSUT2 DD DSN=T12345.FILE2.PS,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SYSIN DD *
COMPARE TYPORG=PS
/*
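A PO comparison, mentioned above, differs only in the TYPORG control statement and in using partitioned datasets as input. A minimal sketch, with illustrative dataset names:

```jcl
//COMPRPO  JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1    EXEC PGM=IEBCOMPR
//SYSUT1   DD DSN=T12345.FILE1.PDS,DISP=SHR
//SYSUT2   DD DSN=T12345.FILE2.PDS,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COMPARE TYPORG=PO
/*
```

As with the PS case, a return code of 0 means the datasets match; 8 means differences were found.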
EXAMPLE 1:
Copying a PDS Member to a Sequential Dataset using IEBGENER
//IEBGENER JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEBGENER
//SYSUT1 DD DSN=T12345.FILE1.PDS(MEMBER1),DISP=SHR
//SYSUT2 DD DSN=T12345.FILE2.PS,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SYSIN DD DUMMY
Explanation:
EXEC Statement:
Executes the IEBGENER utility to copy data.
DD Statements:
SYSUT1 DD:
DSN=T12345.FILE1.PDS(MEMBER1): Specifies the input file as a PDS member (MEMBER1) from the PDS dataset (T12345.FILE1.PDS).
SYSUT2 DD:
EXAMPLE 2:
Copying a Sequential Dataset to a PDS Member
Explanation:
EXEC Statement:
Executes the IEBGENER utility to copy data.
DD Statements:
SYSUT1 DD:
IEBCOPY is used to copy, merge, and compress Partitioned Datasets (PDS) or Partitioned Dataset Extended (PDSE).
It is primarily used for copying members between PDS/PDSE datasets and for creating backups of partitioned datasets.
IEBCOPY can copy an entire PDS/PDSE, selected members, or compress a PDS by removing unused space
Lists the number of unused directory blocks for efficient dataset utilization.
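Compressing a PDS in place, mentioned above, can be sketched by naming the same dataset as both input and output. The job and dataset names here are illustrative:

```jcl
//COMPRESS JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1    EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//MYPDS    DD DSN=T12345.SOURCE.PDS,DISP=OLD
//SYSIN    DD *
  COPY OUTDD=MYPDS,INDD=MYPDS
/*
```

When INDD and OUTDD refer to the same DD, IEBCOPY reclaims the space left by deleted or rewritten members. DISP=OLD gives the job exclusive use of the dataset during the compress.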
EXAMPLE:
Copy from one PDS to another PDS
EXEC Statement: Executes the IEBCOPY utility to copy members.
DD Statement:
SYSUT1 DD: Specifies the input PDS (T12345.SOURCE.PDS) that contains the members to be copied.
SYSUT2 DD: Specifies the output PDS (T12345.TARGET.PDS) where the members will be copied to.
SYSPRINT DD: Sends the copy operation log and messages to SYSOUT.
SYSIN DD: Contains control statements. COPY INDD=SYSUT1,OUTDD=SYSUT2, tells IEBCOPY to copy all members from the input PDS to the output PDS
This IEBCOPY JCL program copies all members from one Partitioned Dataset (PDS) (T12345.SOURCE.PDS) to another PDS (T12345.TARGET.PDS).
Utilities are simple, pre-written programs which perform commonly needed functions.
Utilities are widely used by system programmers and application developers for day-to-day requirements such as organizing and maintaining data.
There are two types of utilities: dataset utilities and system utilities.
Dataset utilities:
Used to perform tasks on datasets.
Names start with IEB.
IEBCOPY, IEBEDIT, IEBCOMPR, and IEBGENER are a few of the dataset utilities.
System utilities:
Used to perform system maintenance tasks.
Names start with IEH.
IEHMOVE, IEHLIST, and IEHPROGM are a few of the system utilities.
Common JCL DD names:
SYSUT1: Input file
SYSUT2: Output file
SYSUT3: Work file for input (SYSUT1)
SYSUT4: Work file for output (SYSUT2)
COND=(0, EQ)
This tells JCL: If the previous step’s return code (RC) was zero (no errors), skip this step.
Example:
Explanation:
STEP1: Try to compile program.
STEP2: Run tests.
STEP3: Copies the output but only if compilation and tests worked
STEP4: Prints an error report if STEP3 failed (RC not 0). If STEP3 worked (RC=0), STEP4 is skipped.
COND=EVEN
This tells JCL: Run this step even if something above failed or abended.
Example:
Explanation:
STEP1: Try to compile program
STEP2: Runs tests.
STEP3: Copies output.
STEP4: The error report always runs, even if a previous step abended or failed.
COND=ONLY
This tells JCL: Run this step only if something above failed or abended.
Example:
Explanation:
STEP1: Try to compile program
STEP2: Runs tests.
STEP3: Copies output.
STEP4: Runs only if any previous step (STEP1, STEP2, STEP3) failed or abended.
COND=(4095,LT)
This tells JCL: If 4095 is less than the previous step’s return code (RC), skip this step. Since the maximum possible RC is 4095, this condition is never true, so the step will never be skipped and will almost always execute.
Example:
Explanation:
STEP1: Try to compile the program.
STEP2: Run tests.
STEP3: Copy the output.
STEP4: Prints an error report every time, because RC from STEP3 will always be less than or equal to 4095, so the skip condition is never met.
COND=(4, EQ, STEP2)
This tells JCL: If STEP2’s return code (RC) was 4 (a warning), SKIP this step.
Example:
Explanation:
STEP1: Try to compile the program.
STEP2: Run tests.
STEP3: Copy the output.
STEP4: Prints an error report if STEP2 did NOT finish with RC=4.
IEBGENER is used to copy records from a sequential dataset, or to convert a dataset from PS to PDS or from PDS to PS.
The copy operation can be performed on all types of records, with various record lengths.
IEBGENER can use a Physical Sequential (PS) dataset, a Partitioned Dataset (PDS) member, or a Partitioned Dataset Extended (PDSE) member as input, and a new or existing PS file or PDS/PDSE member as output.
EXAMPLE:
Copying Data Between Two Sequential Datasets:
//IEBGENER JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEBGENER
//SYSUT1 DD DSN=T12345.FILE1.PS,DISP=SHR
//SYSUT2 DD DSN=T12345.FILE2.PS,DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SYSIN DD DUMMY
Explanation:
EXEC Statement:
Executes the IEBGENER program, which copies data from one dataset to another.
DD Statements:
SYSUT1 DD:
DSN=T12345.FILE1.PS: Specifies the name of the input dataset.
SYSUT2 DD:
DSN=T12345.FILE2.PS: Specifies the name of the output dataset.
This program uses the IEBGENER utility to copy data from the input dataset (T12345.FILE1.PS) to the output dataset (T12345.FILE2.PS). It reads the data from SYSUT1 and writes it to SYSUT2
EXAMPLE:
Renaming a Member While Copying
Explanation:
EXEC Statement: Executes the IEBCOPY utility program to copy and rename members in a Partitioned Dataset (PDS).
DD Statement:
SYSUT1 DD: Specifies the source PDS (T12345.SOURCE.PDS) that contains the original member (OLDNAME).
SYSUT2 DD: Specifies the target PDS (T12345.TARGET.PDS) where the renamed member (NEWNAME) will be stored.
SYSPRINT DD: Outputs logs and messages generated by IEBCOPY to the system output.
SYSIN DD: Contains the control statements for the copy operation:
EXAMPLE:
Copying a UNIX System Services (USS) File to a Sequential Dataset
Explanation:
EXEC Statement: Executes the IEBGENER utility, which is used for copying datasets.
DD Statements:
//STEP1 EXEC PGM=IGYCRCTL //* Compile the program
//STEP2 EXEC PGM=TESTPROG //* Run tests
//STEP3 EXEC PGM=COPYOUT //* Copy output if tests pass
//STEP4 EXEC PGM=PRTERRPT,COND=(0,EQ) //* Print error report only if STEP3 failed
SYSIN: Used to pass parameters for the utility
SYSOUT: Output file for messages from the utility
SYSPRINT: Output file for printed output from the utility
SYSUDUMP: Output file for a system dump if the program fails
SYSPRINT DD:
Directs the output and informational messages to the standard output (SYSOUT=*).
SYSOUT DD:
Also directs general system output to SYSOUT=* (standard system output).
SYSPRINT DD:
Directs messages and logs to the system output (SYSOUT=*).
SYSOUT DD:
Directs general system output to standard system output (SYSOUT=*).
This program uses IEBGENER to copy a specific member (MEMBER1) from a partitioned dataset (T12345.FILE1.PDS) to a sequential dataset (T12345.FILE2.PS). It reads the content of the PDS member from SYSUT1 and writes it to the sequential dataset specified in SYSUT2
DSN=T12345.FILE1.PS: Specifies the name of the input dataset (T12345.FILE1.PS), which is a sequential dataset (PS).
SYSUT2 DD:
DSN=T12345.FILE2.PDS(MEMBER1): Specifies the output location as a PDS member (MEMBER1) in the partitioned dataset (T12345.FILE2.PDS).
SYSPRINT DD:
Directs informational messages and logs to system output (SYSOUT=*).
SYSOUT DD:
Handles general system output, also directed to the default system output (SYSOUT=*)
This program uses the IEBGENER utility to copy data from a sequential dataset (T12345.FILE1.PS) to a member (MEMBER1) of a partitioned dataset (T12345.FILE2.PDS). It reads the content from the sequential dataset specified in SYSUT1 and writes it into the specified PDS member in SYSUT2
//IEBCOPY JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=T12345.SOURCE.PDS,DISP=SHR
//SYSUT2 DD DSN=T12345.TARGET.PDS,DISP=SHR
//SYSIN DD *
COPY INDD=SYSUT1,
OUTDD=SYSUT2
/*
If STEP2 returned RC=4 (just a warning), STEP4 is skipped and no error report is generated.
If STEP2 returned any other code (0, 8, 12, ...), STEP4 runs and prints the error report.
COPY OUTDD=SYSUT2, INDD=SYSUT1: Tells IEBCOPY to copy data from SYSUT1 to SYSUT2.
SELECT MEMBER=((OLDNAME, NEWNAME)): Instructs IEBCOPY to copy the member (OLDNAME) from the source PDS and rename it as (NEWNAME) in the target PDS.
This JCL copies a member named OLDNAME from the source PDS (T12345.SOURCE.PDS) and saves it into the target PDS (T12345.TARGET.PDS) with a new name NEWNAME
//IEBCOPY JOB (12345),'RENAME COPY',CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=T12345.SOURCE.PDS,DISP=SHR
//SYSUT2 DD DSN=T12345.TARGET.PDS,DISP=SHR
//SYSIN DD *
COPY OUTDD=SYSUT2,
INDD=SYSUT1
SELECT MEMBER=((OLDNAME,NEWNAME))
/*
SYSUT1 DD: Defines the input source as a UNIX file located at /mth9/input1/transfer.mon. FILEDATA=TEXT means the file will be read as text, and PATHOPTS=ORDONLY opens the file in read-only mode.
SYSUT2 DD: Specifies the output dataset as a physical sequential (PS) dataset T12345.FILE2.PS
SYSPRINT DD: Directs informational messages and logs to the system output (SYSOUT=*), which is the standard output.
SYSIN DD: Uses DUMMY, meaning that no input control statements are needed for this operation.
This job copies the contents of a UNIX file (/mth9/input1/transfer.mon) to an existing PS dataset (T12345.FILE2.PS).
//IEBGENER JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD PATH='/mth9/input1/transfer.mon',
// FILEDATA=TEXT,PATHOPTS=ORDONLY
//SYSUT2 DD DSN=T12345.FILE2.PS,DISP=SHR
//SYSIN DD DUMMY
EXAMPLE:
Exclude Members While Copying PDS
//IEBCOPY JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEBCOPY
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DSN=T12345.SOURCE.PDS,DISP=SHR
//SYSUT2 DD DSN=T12345.TARGET.PDS,DISP=SHR
//SYSIN DD *
COPY OUTDD=SYSUT2,
INDD=SYSUT1
EXCLUDE MEMBER=(TEMP1,OLDPGM)
/*
Explanation:
EXEC Statement: Executes the IEBCOPY utility to copy members from one PDS to another, excluding specific ones.
DD Statement:
SYSUT1 DD: Specifies the input PDS (T12345.SOURCE.PDS) that contains all the members.
SYSUT2 DD: Specifies the output PDS (T12345.TARGET.PDS) where the members will be copied, except the excluded ones.
SYSPRINT DD: Outputs logs and messages related to the copy process.
SYSIN DD: Contains the control statements:
COPY INDD=SYSUT1,OUTDD=SYSUT2: initiates the copy operation.
EXCLUDE MEMBER=(TEMP1,OLDPGM): Tells IEBCOPY to skip copying these specific members.
This JCL copies all members from the source PDS to the target PDS except the ones explicitly excluded (TEMP1 and OLDPGM). It’s helpful when you want to copy almost everything but leave out outdated or unnecessary members
Generation Dataset Group base
A GDG base is a catalog entry that defines the common part of the name for all generation datasets.
It doesn't store any data itself but controls how generations are created and managed
It sets the rules like how many generations can exist and what happens when the limit is reached
Before using a GDG, we need to create a GDG base using the IDCAMS utility. Once the base is created, we can start creating generations (individual datasets) under it.
EXAMPLE
Creating a GDG base with a 5-generation limit, automatically deleting the oldest generation when the limit is reached:
EXPLANATION:
DEFINE GDG
Command used to create a new GDG base entry in the catalog.
NAME(MYDATA.BACKUP.REPORT)
Specifies the name of the GDG base.
All generations created under this base will start with MYDATA.BACKUP.REPORT.
LIMIT (5)
Defines the maximum number of generations that can exist at the same time.
Here, up to 5 generation datasets are allowed.
NOEMPTY
When the limit is reached, only the oldest generation is deleted.
Recent generations are retained.
SCRATCH
When a generation is deleted, it is removed completely from both catalog and disk.
The dataset cannot be recovered later.
//IEBGENER JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEBGENER
//SYSUT1 DD DSN=T12345.FILE1.PS,DISP=SHR
//SYSUT2 DD DSN=T12345.FILE2.PDS(MEMBER1),DISP=SHR
//SYSPRINT DD SYSOUT=*
//SYSOUT DD SYSOUT=*
//SYSIN DD DUMMY
//STEP1 EXEC PGM=IGYCRCTL //* Compile the program
//STEP2 EXEC PGM=TESTPROG //* Run tests
//STEP3 EXEC PGM=COPYOUT //* Copy output
//STEP4 EXEC PGM=PRTERRPT,COND=EVEN //* Always print error report (even on abend/failure)
//STEP1 EXEC PGM=IGYCRCTL //* Compile the program
//STEP2 EXEC PGM=TESTPROG //* Run tests
//STEP3 EXEC PGM=COPYOUT //* Copy output
//STEP4 EXEC PGM=PRTERRPT,COND=ONLY //* Print error report only if a previous step failed or abended
//STEP1 EXEC PGM=IGYCRCTL //* Compile the program
//STEP2 EXEC PGM=TESTPROG //* Run tests
//STEP3 EXEC PGM=COPYOUT //* Copy output
//STEP4 EXEC PGM=PRTERRPT,COND=(4095,LT) //* Error report always runs because RC never exceeds 4095
//STEP1 EXEC PGM=IGYCRCTL //* Compile the program
//STEP2 EXEC PGM=TESTPROG //* Run tests
//STEP3 EXEC PGM=COPYOUT //* Copy output
//STEP4 EXEC PGM=PRTERRPT,COND=(4,EQ,STEP2) //* Print error report except when STEP2 RC=4
//MYJOB JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP01 EXEC PGM=IDCAMS
//SYSIN DD *
DEFINE GDG(NAME(MYDATA.BACKUP.REPORT) -
LIMIT(5) -
NOEMPTY -
SCRATCH)
/*
A GDG (Generation Data Group) is a collection of datasets that are logically related and share a common naming structure. It's commonly used to manage multiple versions of data over time
GDGs help in organizing datasets that are created on a regular basis like daily reports, backups, or job outputs making version control simple and automated
All datasets in a GDG share a common prefix, known as the GDG base.
Example: MYDATA.TEST.SAMPLE.GDG is the base name
Each dataset in the group is named using the base followed by a generation and version number. Example:
MYDATA.TEST.SAMPLE.GDG.G0001V00
MYDATA.TEST.SAMPLE.GDG.G0002V00
A GDG can have up to 255 generations at one time. When the limit is reached, older generations can be automatically deleted, based on how the GDG is defined (e.g., with a limit and a scratch or no-scratch option).
GDG datasets are usually sequential (PS), but they can also be partitioned (PDS)
All generations under a GDG must have the same dataset attributes, such as record format, record length, and dataset organization.
RULES
All generations within a GDG must share the same attributes. This includes DCB parameters like record format and record length, ensuring consistency across the group
A GDG can have up to 255 generations at a time. If you reach this limit, older generations may be automatically deleted, depending on how the GDG is set up.
Both DSN (the name of the dataset) and UNIT (where it should be stored) are required when allocating a new generation
When you're creating a new generation, make sure to set the DISP parameter to (NEW, CATLG, ...) in your JCL. This tells the system to treat it as a brand-new dataset and add it to the catalog.
USES
GDGs make it easy to manage files that are created on a schedule daily, weekly, monthly, or even yearly. Each version is stored neatly under the same base name, making everything easier to track.
GDGs automatically handle file versions for you. This reduces manual effort and prevents mistakes in using the wrong dataset
You can set a limit on how many generations to keep. When that limit is reached, older versions are automatically removed. This helps manage disk space without manual cleanup
GDG GENERATION
A generation is an individual dataset that belongs to a Generation Data Group (GDG).
Each time you create a new version of the same kind of data (for example, a daily report or a backup file) it becomes a new generation within the GDG
Generations are automatically named by the system with a generation number and a version number at the end like: MYDATA.BACKUP.REPORT.G0001V00, MYDATA.BACKUP.REPORT.G0002V00
All generations share the same GDG base name (MYDATA.BACKUP.REPORT in this example) but have different unique suffixes (GxxxxVxx)
If you run a backup job every night, each night's backup will be stored as a new generation under the same GDG base
EXAMPLE
//CRTEGEN JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=IEFBR14
//MYGDG DD DSN=MYDATA.BACKUP.REPORT(+1),
// DISP=(NEW,CATLG,DELETE),
// SPACE=(CYL,(1,1),RLSE),
// UNIT=SYSDA,
// DCB=(RECFM=FB,LRECL=80,BLKSIZE=800,DSORG=PS)
Explanation:
JOB Statement: Starts the job named CRTEGEN.
EXEC Statement: Runs the IEFBR14 utility to create a dataset without doing any processing.
DD Statement:
DSN=MYDATA.BACKUP.REPORT(+1): Creates a new generation under the GDG base (MYDATA.BACKUP.REPORT). (+1) means "next generation" or "new generation".
DISP=(NEW,CATLG,DELETE): Creates and catalogs the dataset if successful, deletes it if the job fails.
SPACE=(CYL,(1,1),RLSE): Allocates 1 cylinder initially and 1 cylinder as secondary.
In a GDG each generation represents a different version of the dataset created over time.
Instead of using the full dataset name every time (like MYDATA.BACKUP.REPORT.G0005V00), we can refer to generations relatively
(0) always points to the latest (most recently created) generation.
(-1) points to the generation created just before the latest.
(-2) points to the one created before that
Similarly (-3), (-4), etc.
EXAMPLE
GDG base is MYDATA.BACKUP.REPORT and you have these generations:
MYDATA.BACKUP.REPORT.G0005V00 (latest)
MYDATA.BACKUP.REPORT.G0004V00
MYDATA.BACKUP.REPORT.G0003V00
In JCL or a program
Referring to MYDATA.BACKUP.REPORT(0) means you are accessing G0005V00
Referring to MYDATA.BACKUP.REPORT(-1) means you are accessing G0004V00
Referring to MYDATA.BACKUP.REPORT(-2) means you are accessing G0003V00
Referencing an existing generation using (0) and (-1)
Example: 1
Referencing an existing GDG generation
Explanation:
DSN=MYDATA.BACKUP.REPORT(0): Refers to the current (most recent) generation (MYDATA.BACKUP.REPORT.G0005V00).
Example: 2
Referencing the previous generation
Explanation:
DSN=MYDATA.BACKUP.REPORT(-1): Refers to the previous generation (the one created just before the current one) (MYDATA.BACKUP.REPORT.G0004V00)
ALTERing a GDG Base
Sometimes after creating a GDG base you might realize you need to change its properties like increasing the limit of generations or switching from NOEMPTY to EMPTY.
Instead of deleting and recreating the base you can simply use the ALTER command to update the existing GDG base definition
You can change attributes like LIMIT, EMPTY/NOEMPTY, and SCRATCH/NOSCRATCH.
You cannot change the GDG name with ALTER command you would need to delete and recreate if you want a new name.
EXAMPLE
Explanation:
This JCL alters the GDG base MYDATA.BACKUP.REPORT to allow 10 generations instead of its previous limit.
DELETEing a GDG Base:
If you no longer need a GDG base (and its associated generations), you can use the DELETE command.
An overview of the MVS. The presentation includes topics on parallel sysplex, hardware, services, security and health checks.
This is a technical, in-depth presentation on using dynamic allocations, dynamic allocation services, control blocks, and a detailed examination of a dynamic allocation example program.
GDGs don’t support VSAM datasets
UNIT=SYSDA: Allocates on the default system storage device.
DCB=(RECFM=FB,LRECL=80,BLKSIZE=800,DSORG=PS): Defines the dataset attributes: Fixed Block format, 80-byte record length, and it’s a sequential dataset.

//ACESGDG JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=MYPROGRAM
//INPUT DD DSN=MYDATA.BACKUP.REPORT(0),
// DISP=SHRIf the generations were created with SCRATCH, they will be completely removed from both catalog and disk.
If the generations were created with NOSCRATCH, they will be uncataloged only and still reside on disk. You will need to know their exact names to access them later.
EXAMPLE:
MYDATA.BACKUP.REPORT <-----— GDG base MYDATA.BACKUP.REPORT.G0001V00 <-----— GDG generation MYDATA.BACKUP.REPORT.G0002V00 <-----— GDG generation MYDATA.BACKUP.REPORT.G0003V00 <-----— GDG generation
Explanation:
DELETE MYDATA.BACKUP. REPORT: Specifies the name of the GDG base that you want to delete.
GDG:Tells the system that the target to be deleted is a Generation Data Group (GDG) base not a regular dataset.
FORCE:Used to delete the GDG base even if it has active generation datasets under it.deletes the base along with all of its generations.
PURGE:Forces the delete, even if the dataset is in use or protected. Useful when normal delete doesn't work
//ALTERGDG JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP01 EXEC PGM=IDCAMS
//SYSIN DD *
ALTER MYDATA.BACKUP.REPORT LIMIT(10)
/*
//DELETEGDG JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP01 EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN DD *
DELETE MYDATA.BACKUP.REPORT GDG FORCE or PURGE
/*
//ACESGDG JOB (12345),CLASS=A,MSGCLASS=X,NOTIFY=&SYSUID
//STEP1 EXEC PGM=MYPROGRAM
//INPUT DD DSN=MYDATA.BACKUP.REPORT(-1),
// DISP=SHRBusiness Application and Software Product programmer job roles differ significantly across the 5 IBM Z mainframe ecosystem categories.
Data Science job roles include: Data Scientist, Data Engineer, Data Analyst, Data Storyteller, and Machine Learning Specialist.
Published Articles on Mainframe Technologies
Experienced professionals, educators, and contributors – your feedback is needed: agree, disagree, or add insights about job roles, responsibilities, skills, experience levels, and jobs within the categorized IBM Z mainframe community.
In a mainframe IT organization, various roles work together to ensure the smooth operation, development, and management of mainframe systems. Here are the key roles typically found in a mainframe IT organization:
Programmers, including system programmers and application programmers;
Administrators, including system administrators (SysAdmin), database administrators (DBA), storage administrators, security administrators, and network administrators;
Analysts and Operators, including system operators, operations analysts, capacity planners, performance analysts, and production control analysts;
Other mainframe roles include mainframe architects, program designers, technical support specialists, project managers, and vendors.
We will break these down to familiarize you with each of these roles.
System Programmer
Role: Installs, configures, and maintains the mainframe operating system (e.g., z/OS) and associated software. They are responsible for system tuning, troubleshooting, and implementing updates and patches.
Skills: Deep understanding of operating systems, scripting languages (e.g., REXX), assembler language, system utilities, and problem-solving.
Application Programmer
Role: Designs, develops, tests, and maintains applications that run on mainframe systems using languages like COBOL, PL/I, Java, and Assembler. They also work on modernizing legacy applications.
Skills: Proficiency in mainframe programming languages, knowledge of CICS, IMS, JCL, and debugging tools.
Database Administrator (DBA)
Role: Manages mainframe databases such as DB2, IMS, or IDMS. Responsibilities include database design, implementation, performance tuning, backup and recovery, and ensuring data security and integrity.
Skills: Expertise in database management, SQL, performance tuning, and understanding of storage systems.
System Administrator
Role: Oversees the mainframe's day-to-day operations, manages system resources, user accounts, and security settings, and ensures optimal performance and availability.
Skills: Knowledge of z/OS, security protocols, automation tools, and performance monitoring.
System Operator
Role: Monitors the mainframe's operational status, manages job scheduling and batch processing, responds to system alerts, and performs routine checks to ensure system health.
Skills: Familiarity with job scheduling tools (e.g., CA-7, Control-M), system monitoring, and problem resolution.
Security Administrator
Role: Manages security protocols, user access, and compliance with security standards on the mainframe. They use tools like RACF, ACF2, or Top Secret to enforce security policies.
Skills: Knowledge of security software, compliance standards, and risk management.
Storage Administrator
Role: Manages mainframe storage systems, including DASD (Direct Access Storage Device) and tape storage. They handle storage allocation, performance tuning, and data backup and recovery.
Skills: Expertise in storage management tools, performance analysis, and data backup strategies.
Capacity Planner
Role: Analyzes current and future resource needs to ensure the mainframe environment can handle workload demands. They focus on optimizing performance and planning for upgrades or expansions.
Skills: Analytical skills, performance monitoring, capacity planning tools, and workload management.
Performance Analyst
Role: Monitors and analyzes system performance to identify bottlenecks and optimize the mainframe environment. They work closely with system programmers and DBAs to tune the system.
Skills: Performance monitoring tools, data analysis, and knowledge of system internals.
Network Administrator
Role: Manages the mainframe network connections, ensuring secure and reliable communication between the mainframe and other systems. They configure and troubleshoot network protocols and interfaces.
Skills: Networking knowledge, experience with protocols like TCP/IP, SNA, and network monitoring tools.
Production Control Analyst
Role: Manages and schedules production workloads, ensuring that jobs run on time and without errors. They coordinate with other teams to implement changes and maintain job schedules.
Skills: Job scheduling software, change management, and attention to detail.
Technical Support Specialist
Role: Provides support for mainframe-related issues, assists users with technical problems, and works to resolve incidents that affect the mainframe environment.
Skills: Problem-solving, communication skills, and knowledge of mainframe systems and tools.
Project Manager
Role: Oversees mainframe-related projects, coordinating between different teams to ensure project goals are met on time and within budget. They manage timelines, resources, and communication.
Skills: Project management, leadership, and understanding of mainframe environments.
Mainframe Architect
Role: Designs and oversees the overall structure of mainframe systems, ensuring they meet the organization's requirements. They provide guidance on best practices and future technology direction.
Skills: Systems architecture, strategic planning, and in-depth mainframe knowledge.
Think large banks, insurance companies, financial institutions, manufacturing, distribution, store retail, telecommunications, government services, airlines, hotels, etc. These are enterprises with millions of customers and billions of computer-processed transactions involving many billions of dollars. Each needs IT (Information Technology) support and services 24 hours a day, 7 days a week, 365 days a year – availability, reliability, serviceability, scalability, and security typically provided by IBM Z mainframe hardware, software, and an IT organization within the enterprise.
The job roles and responsibilities include significant separation of duties. As a result, each job role includes tasks that are typically well defined, with senior technicians available to mentor new hires to become technically proficient.
These jobs include an expectation to learn the business over time to grasp the significance of a given job role within the larger context of supporting the business.
Consider the question, "What is Blue Cross / Blue Shield?" Many might respond, "a health care organization." Blue Cross / Blue Shield for a given state is a large IT organization processing transactions related to health care insurance agreements and health care insurance claims.
Many of these large enterprises must operate using mandated regulations, policies, and procedures. A regulation of large IT organizations providing support and services to a large enterprise is a separation of duties and responsibilities. The regulation is a safeguard to mitigate risk. The separation of duties results in well defined specific job responsibilities and skills.
The good news is that individual job responsibilities give full attention to a very specific IT job, enabling the employee to become extremely skilled within that specific responsibility. These jobs have well defined procedures and tasks completed by numerous technicians, from entry level to highly experienced. At the entry level, these jobs can be a great place to start a career. Many of them can be mastered quickly because they are typically well defined tasks that entry-level hires learn from highly experienced technicians in the group.
The bad news is a lack of exposure to and understanding of the other IBM Z mainframe job responsibilities and skills. Once you become one of the highly experienced members of the group, it is best to develop leadership skills to mentor the early-career hires while looking for another role, either among the many job roles in the company or outside the company.
A great advantage of learning IBM Z mainframe, the technology is a critical tool of business applied to wide variety industries. Your skill with the critical IBM Z mainframe business tool makes you valuable to many employers.
Medium and small enterprises will have a smaller number of customers, smaller total revenue, and typically do not have the strict regulation requirements that are mandatory at most larger enterprises. As a result, medium and small enterprise IT organizations combine job roles and responsibilities.
The good news is those with significant experience from medium and small enterprises typically have more diverse experience and awareness of all IBM Z mainframe technology after several years of on the job experience.
The bad news is the learning journey involving a variety of combined job roles and responsibilities will take longer to master. Also, the medium and small IT organizations are more likely to recruit those with experience from other IT organizations.
What all the enterprise IT production organizations have in common is collection, storage, and processing of data securely. What makes various industry enterprise IT production different is the volume of data, variety of data, regulations applied to the data, and speed at which the data must be processed.
The job roles and responsibilities include significant combination of duties discussed in the Large Enterprise IT Production. As a result, each job role includes tasks that are diverse with senior technicians available to mentor new hires to become technically proficient. Mastery of the combined job roles will take longer.
These jobs include an even higher expectation to learn the business over time to grasp the significance of a given job role within the larger context of supporting the business.
While mastering combination of job roles will take longer, the outcome is significant responsibility with higher compensation and personal value.
Examples of combined job roles include:
· Systems Programmer responsibilities for all System Administration, Performance and Capacity Planning, Systems and Solution Architect Consultation.
· Operations responsibilities for all Operations which potentially could include Systems Programming and Systems Administrator job responsibilities.
Independent Software Vendors, ISVs, provide software to production IT organizations necessary to run and support the Production IT business application services.
Enterprise IT may have software engineer titles, job roles, and responsibilities. It is more likely that Independent Software Vendor and Consulting/Integration organizations will have software engineer titles, as a result of the specific software product offerings and/or service/support offerings they provide to enterprise IT organizations.
IBM provides the IBM Z mainframe hardware with support and services. IBM provides the IBM Z mainframe operating systems such as z/OS with support and services.
Production IT organizations will typically purchase software from ISVs. It is in the best interest of production IT organizations to purchase software which comes with on-going support, services, maintenance, and feature/function enhancements. In the long run it is far more cost effective for the company than writing and maintaining software in the majority of situations. Many production IT job responsibilities include installation, configuration, implementation, and production support for these ISV software products.
Production IT support personnel have an immediate escalation support organization to contact when problems are encountered involving these ISV software products. ISV support includes highly experienced technical personnel who enable you to resolve problems with their products quickly while advancing your technical skills.
In addition to IBM providing the IBM Z mainframe, IBM provides hardware support and services, operating system support and services, and operating system software product support and services. Many production IT job responsibilities include installation, configuration, implementation, and production support for IBM Z hardware, operating systems, and software products.
Production IT support personnel have an immediate escalation support organization to contact when problems are encountered involving the IBM Z mainframe, operating systems, and software products. The IBM support organization includes highly experienced technical personnel who enable you to resolve problems with IBM products quickly while advancing your technical skills.
ISVs and IBM have excellent relationships. IBM and ISVs objective is to best serve the mutual customers that use the IBM Z mainframe. Additionally, offering customer choice is just good business for ISVs and IBM. Frequently, ISVs fill opportunity gaps with products, features, and functions not available in IBM software products.
The job roles and responsibilities are significantly different from Enterprise IT Production and Facility and Service Providers.
ISVs create, maintain, and support IBM Z mainframe software products used by Enterprise IT businesses.
The job roles are focused on software product creation, maintenance, support, product consulting, product marketing, and product sales.
The ISV job roles can be entry level and progress to high experience levels. The various job roles are determined by the software products, support, and services needed by Enterprise IT and the Facility and Service Providers.
An ISV Systems/Solution Architect works closely with ISV marketing and sales associates to understand and articulate the needs of the Enterprise IT and Facility/Service Providers, providing product development programmers with tasks to build, maintain, advance, and support the ISV software product portfolio.
Experienced ISV product developers and ISV product support personnel need to possess technical skills and proficiency with parts of the IBM Z mainframe operating system, system programming, and systems administration in order to support the ISV software product within the Enterprise IT or Facility/Service Provider organizations. Experienced ISV associates enable technical growth of Enterprise IT or Facility/Service Providers as a result of the ISV technical support.
IBM has many thousands of employees responsible for design, build, and support of the IBM Z mainframe hardware, operating systems, and software products used by Enterprise IT or Facility/Service Providers. IBM, like the ISVs, have highly specialized job responsibilities focused on hardware, specific components of the operating systems, and software products.
Like the ISVs, IBM marketing, sales, architects, and consultants work closely with each other and their assigned Enterprise IT or Facility/Service Provider organizations requiring them to have awareness of the various Enterprise IT or Facility/Service Provider job responsibilities to best help Enterprise IT or Facility/Service Provider support their tasks and advancing their technology.
Development, maintenance, enhancement, and modernizing large scale end-to-end business application services requires deep industry specific knowledge accompanied by knowledge and awareness of IT hardware and software to be proposed and implemented by various IT departments.
While IBM Z mainframe hardware and software are critical, end-to-end business solutions involve technology external to the IBM mainframe such as networks, Point of Sale (POS) devices, Automatic Teller Machines (ATM) devices, cell phones, internet browsers, and wide variety of emerging technology devices known as Internet of Things (IOT).
Consulting and System Integration organizations are frequently retained by large enterprise production IT Management, and senior technical staff members to assist with development, maintenance, enhancement, and modernizing large scale end-to-end business application services specific to their industry.
An internet search ‘business consulting and system integration’ provides insight into this job category. These organizations need business oriented thinking staff with technology specific skills such as IBM Z mainframe technology.
These job roles are primarily business and industry focused but require a much better than average knowledge and awareness of the distinctive strengths of various hardware and software technologies to propose solutions.
The IBM Z mainframe is playing a significantly bigger role in large-scale business solutions today and will play a bigger role in the future as a result of the 'holy grail' of IT business application processing – prescriptive analytics at time of transaction. Recent technology advancements in IBM Z mainframe hardware and software opened the door for development and implementation of prescriptive analytics at time of transaction. A set of 'Data Science' job roles exists and will grow as a result.
An internet search ‘prescriptive analytics’ provides insight into this job category. While IBM and ISVs provide software to help implement prescriptive analytics, the purchase of software follows a developed business solution plan by the production IT organization typically with the assistance of System Integration Consultants.
Consulting and System Integration focus is on industry-targeted Enterprise IT organizations, with the intention to assist the organization's decision making involved with design/plans for new business services and modernization of existing business services. These job roles need a surface-level awareness of the business organization and its various technical roles, responsibilities, and skills to accomplish their mission as trusted advisors to the Enterprise IT executives, management, and senior-level technicians, helping them make good technology decisions.
An internet search for 'Business Consulting and System Integration' provides insights into this IBM Z mainframe ecosystem category. You will find many of the business consulting organizations that are well known to enterprise executives.
While this is a separate IBM Z mainframe ecosystem category, it is common for Enterprise IT and Facility/Service Providers to engage with IBM, ISVs, and the Business Consulting and System Integration organizations to evaluate various design/plan options to technology advance and grow the Enterprise IT and Facility/Service Providers.
Large, medium, and small enterprise businesses, along with government entities, exist to provide products and services. Company executives may decide their primary mission is not IT and might negotiate with a Facilities and Service Provider to use their Data Center.
An enterprise IT organization is mandatory to provide enterprise business services. It is possible for the enterprise to pay a different company to provide IBM Z mainframe data center facilities hardware, software, network, technical support and services, commonly known as Infrastructure as a Service, IaaS.
The Facilities and Service Provider job roles are highly focused on hardware, software, and network technology. The business organization may choose to focus exclusively on the business and business services.
Contracting with a Facility and Service Provider requires the enterprise to have staff that is technically proficient enough to supervise the day-to-day IT services, in addition to details involving performance, capacity, service level agreements, variable costs of services, risk mitigation, oversight of data security, and requirements for new features and functions.
Facility and Service Provider technical job roles are very similar to those of Enterprise IT Production organizations. The difference is the separation from knowledge of the business. Another difference is that Facility and Service Provider technicians' skills and tasks are frequently applied to many different Enterprise IT Production organizations.
Definition: IT Software Engineers work with operating systems, applications, and programs. They work with system programmers, analysts, and other engineers to design systems, project capabilities, and determine performance interfaces. They analyze user needs, provide consultation services to discuss design elements, and coordinate software installation. Software Engineer title is frequently associated with ISV software product development and can be used as a generic title for many system administration support and service job roles.
Skills:
Resources:
Category:
IT Architect roles and responsibilities are diverse. These are rarely entry-level positions; they are job responsibilities requiring specific career experiences. An architect translates the needs of a business into big-picture blueprints for implementation. Architect skills and capabilities are acquired through specific business experience, to be applied to future business technology decisions. What all architects have in common is the methodologies used to guide the development process and ensure that the final product meets the organization's specific needs.
Definition: Involves the organization of software systems leveraged by a company; this architect is a recognized visionary strategist.
Skills:
Resources:
Category:
Enterprise Architect
Definition: Must take a broader view and check whether the solution strategy chosen by the solution architect is in accord with the company's mission. To do that, the enterprise architect should be aware not only of the organization's internal policies and goals but of the environment as well.
Skills:
Resources:
Category:
Definition: Involves the physical placement of all software components on hardware.
Skills:
Resources:
Category:
Technical Architect
Definition: Works in close cooperation with the solution architect to deliver the best result, providing a link between the strategic idea and its technical implementation. While executing an IT project, technical architects adopt a hands-on approach, which requires an exceptional level of in-depth proficiency. This requirement conditions two peculiarities of the technical architect versus the solution architect.
Skills:
Resources:
Category:
Duplicate this page to create your own, and introduce us to your journey!
Note: You cannot edit the title from the TOC
Be sure not to share any screenshots of proprietary information from your work!
Leanne Wilson, from raw recruit to senior consultant:
Job responsibilities ensure the IBM Z mainframe is available to the business and business customers, resolving problems, advancing hardware and software technology, controlling change, and serving as a focus point for problem management and resolution.
Mainframe Operator
Definition: Responsible for the day-to-day operations and maintenance of the mainframe system, including monitoring system performance, running backups, and responding to system alerts.
Skills: Proficiency in mainframe operating systems (e.g., z/OS), knowledge of system monitoring tools, troubleshooting abilities, and strong communication skills.
Resources: IBM manuals, system documentation, monitoring tools.
Category: Operations and Support
Production Control Analyst
Definition: Manages job scheduling and batch processing, and ensures that production runs smoothly without conflicts or delays.
Skills: Expertise in job scheduling tools (such as CA-7, Control-M), understanding of production environments, strong organizational skills.
Resources: Job scheduling software manuals, system logs, production schedules.
Category: Operations and Support
Storage Administrator
Definition: Manages mainframe storage, including allocating disk space, optimizing performance, and ensuring data integrity.
Skills: Knowledge of storage management tools, understanding of disk architectures, troubleshooting skills.
Resources: Storage system documentation, performance monitoring tools, disk allocation guidelines.
Category: Infrastructure and Storage
Security Administrator
Definition: Maintains mainframe security, controls access, implements security policies, and monitors for potential security threats.
Skills: Proficiency in security tools (such as RACF), knowledge of security protocols, risk assessment abilities.
Resources: Security manuals, protocols documentation, security analysis tools.
Category: Security and Compliance
Network Administrator
Definition: Manages mainframe network configurations and connectivity, and ensures smooth data transmission.
Skills: Network protocols knowledge, familiarity with mainframe network tools, troubleshooting network issues.
Resources: Network configuration guides, monitoring tools, network protocol documentation.
Category: Networking
Database Administrator
Definition: Manages mainframe databases, including installation, maintenance, and optimization of databases.
Skills: Proficiency in database management systems (DB2, IMS), SQL expertise, troubleshooting database issues.
Resources: Database manuals, database performance tools, SQL guides.
Category: Database Management
CICS Administrator
Definition: Manages Customer Information Control System (CICS), ensuring its smooth operation, performance, and integration with other systems.
Skills: Proficiency in CICS, problem-solving abilities, familiarity with transaction processing.
Resources: CICS documentation, transaction logs, performance monitoring tools.
Category: Middleware and Transaction Processing
MQ Administrator
Definition: Manages IBM MQ (formerly MQSeries) for message-oriented middleware, ensuring message queue management and connectivity.
Skills: Knowledge of MQ, troubleshooting message queues, understanding of message-oriented middleware.
Resources: MQ documentation, message queue logs, performance monitoring tools.
Category: Middleware and Messaging
IMS Administrator
Definition: Manages Information Management System (IMS), ensuring its operation, database connectivity, and performance.
Skills: Proficiency in IMS, database administration skills, troubleshooting IMS-related issues.
Resources: IMS documentation, database logs, performance monitoring tools.
Category: Database Management
Capacity Planner
Definition: Analyzes system performance, predicts resource needs, and plans for future system expansion or modifications.
Skills: Performance analysis tools, capacity planning methodologies, forecasting skills.
Resources: Performance analysis tools, historical performance data, capacity planning models.
Category: Performance Analysis and Planning
Mainframe Manager
Definition: Oversees the mainframe team, sets strategic goals, and ensures efficient operations and resource allocation.
Skills: Leadership, strategic planning, team management, and decision-making abilities.
Resources: Leadership guides, team performance reports, strategic planning tools.
Category: Leadership and Management
Learn how to build your mainframe skills and earn digital badges.
In 2016, a new way of learning and earning credentials in the IT world was launched. Digital Certificate Badges allow you to easily share your achievements on social media. Each badge is based on a specific topic and area of expertise. As you learn, you earn Digital Certificate Badges to add to your curriculum and your social media, and to share with others. It is a great way of learning that helps you increase your skills while you also engage with the community.
The mainframe ecosystem has embraced this approach, and most mainframe content can be found in a badge layout. In this section you will find some initial free-of-charge recommendations on how to earn badges and start to build your mainframe skills.
Provide Opportunities – Jobs, Vitality Program, etc.
The goal of this section is to share opportunities – jobs, the Vitality Program, etc. If you know where to find relevant information about job opportunities, we invite you to share it in the sections below.



System Programmers: Responsible for the installation, maintenance, and customization of the mainframe operating system and related software.
Application Developers: Focus on designing, developing, testing, and maintaining applications that run on the mainframe.
Database Administrators: Manage and oversee database systems, ensuring their availability, performance, and security.
Security Analysts: Protect the mainframe environment by implementing security measures and monitoring potential threats.
Operations Analysts: Ensure the efficient operation of mainframe hardware and software, troubleshooting issues as they arise.
Technical Support Specialists: Provide assistance and solutions to users encountering technical difficulties with mainframe systems.
Network Administrators: Oversee the connectivity and communication between the mainframe and other networks.
IBM Z mainframe jobs fall within, and are separated by, 5 unique employer categories serving governments and all industries globally.
Generic professional job responsibilities of government and all industries include:
In-depth module training based on the role chosen, including hands-on labs.
Name:
[Your name here]
Location:
[This section focuses on origin story and the transition.]
Before the Mainframe: My background? (e.g., College major, previous career, self-taught. Include visuals or links as needed)
The Turning Point: What was the moment or reason you decided to pursue a career in mainframe technology?
Key Learning Resources: The most valuable courses, programs, or certifications that helped you get started: (e.g., IBM Z Xplore, Open Mainframe Project training, specific university courses.)
This section brings the role to life for the reader.
What is the core problem you solve for your company? (Keep this high-level and business-focused.)
What are the top 3 technologies or tools you use every day? (e.g., JCL, VS Code, Zowe, REXX, DB2, CICS.)
What is the most interesting or rewarding part of your job?
What is one key skill (technical or soft) you wish you had learned sooner?
What is your top piece of advice for a student or professional trying to break into the industry?
Where do you see the most exciting growth or opportunity in the mainframe space right now?
Explore one of the prominent digital badges in the mainframe field.
One good recommendation to start with is the z/OS Mainframe Practitioner Badge by IBM.
By earning this badge, you will have developed foundational skills in IBM Z hardware and software, especially around z/OS and system administration. Through hands-on labs on a live IBM Z server, you will have gained real-world experience and be equipped to pursue a career as a mainframe application developer, system programmer, system administrator, or DBA practitioner.
Technical Specialist | Security | Telecommunications | Adaptability & Flexibility | Linux | Mainframe | Z Systems | IBM Z
Complete all courses in the IBM z/OS Mainframe Practitioner Professional Certificate program on Coursera (including quizzes, hands-on assignments and projects), and earn the following badges:
Check out some of our partner programs that offer Mainframe related courses and development programs:
Broadcom's Vitality program is an innovative skills development program to cultivate next-generation mainframe talent at low to no cost for Broadcom customers. For those that are interested in a career in Mainframe, please reach out to mainframe.vitality@broadcom.com for more information about the program and how you can participate.
IBM's Apprenticeship Program provides an entry point into IBM for candidates with relevant skills who may not have a traditional college degree — this skills-first approach to talent is what we call a “New Collar” Initiative. For more information, click .
Interskill Learning develops and delivers the global Z Mainframe Computing Industry's only comprehensive curriculum of self-paced e-learning. To take advantage of Interskill Learning's training, click .
ProTech offers thousands of different courses in topics ranging literally from A to Z, from A+ certification to z/OS Mainframe systems. ProTech is an elite Tier 1 Managed Microsoft Partner for Learning Services and is partnered with such leading firms as IBM, Broadcom, HP, Apple, Cisco, Citrix, and more. Their curriculum is endorsed by both the Project Management Institute and the International Institute for Business Analysis, and is aligned with the ScrumAlliance, SAFe, and Scrum.org certifications. Click for more information.
Courses, Tutorials, Manuals
Education Programs
Digital Certificate Badges

[Feel free to be as vague as you want: Continent, Earth, or rep your hometown. Just remember this is public.]
LinkedIn:
[Share if you'd like, this will be a more "current" snapshot of you as your story evolves, too]
Primary Mainframe Focus:
[To futureproof, you can put a general area, and get into more of the specifics below]
Amount of time in the Mainframe Space as of writing:
[Since this will live on long past when you write this, capture it "as of writing", in a format like "X years as of 2026", for example.]


System/CICS Administrator
Systems Programmer
Automation Specialist
Here is the journey of Anna McKee
Today, there are Universities and Companies that have created their own education programs to ensure the continuation of upskilling those who are interested in learning and working in the Mainframe area.
The following is a list of universities that have an active mainframe course or mainframe curriculum:
Tips for your journey on your career path:
Grow your Network:
Join Communities
Meet mentors
Continue Learning:
Ready to start your mainframe career?
Check the mainframe job postings on the following sites:
Planet Mainframe
Indeed
IBM SkillsDepot
You can also search for job opportunities on general job boards such as Indeed.
Some ideas on key words to search for:
Mainframe developer
Mainframe administrator
CICS
JCL
DB2
Don't forget, there are many areas and careers that relate to the mainframe without necessarily having a background in the above items!
Revisit our Category Definitions to look at the types of companies that may have postings in the above categories, and that may also have jobs for roles adjacent to those positions.
The list below highlights several courses, tutorials, and manuals available on various mainframe-related topics. Neither this project nor Open Mainframe Project reviews, maintains, or endorses any one of these courses, tutorials, and manuals.
IBM Redbooks
Develop technical know-how on IBM products
IBM z/OS Internet Library
Access all of IBM z/OS manuals
IBM Z Education and Training
Find out more about how to further develop your IBM Z skills
IBM Z Xplore
Join IBM Z's hands-on, virtual platform experience
Coursera: IBM z/OS Mainframe Practitioner
Learn more about z/OS and launch your career as a practitioner
Interskill: Mainframe Training Online
Hands-on, on-demand mainframe training
Open Mainframe Project's COBOL Programming Course
Free and open-source COBOL programming course
MainframesTechHelp
Online mainframe tutorial
ProTech
Thousands of courses ranging from A+ certification to z/OS Mainframe systems
Bergen Community College
USA
Durham College
Canada
East Carolina University
USA
Eastern Illinois University
USA
Faculdade de Tecnologia de Sao Paulo
Brazil
Fanshawe College
Canada
Hogeschool Ghent
Belgium
Illinois State University
USA
Indian Hills Community College
USA
Marist College
USA
North Carolina A&T
USA
Northern Illinois University
USA
St Lawrence College
Canada
Tennessee State University
USA
Universidad de Buenos Aires
Argentina
Universidad Nacional Autonoma de Mexico
Mexico
Universidad Nacional de La Matanza
Argentina
University of Ballarat
Australia
University of Illinois, Springfield
USA
University of Nebraska
USA
University of North Florida
USA
University of North Texas
USA
Vilnius
Lithuania
Western University
Canada
Network with like-minded individuals through the available mainframe communities.
The list below highlights several mainframe-related communities. With the exception of the Open Mainframe Project, neither this project nor Open Mainframe Project reviews, maintains, or endorses any one of these communities.
The primary user-group for anything mainframe related
Please submit your topics for future vote
Area of knowledge
Below is "Free Mainframe Training", a collection of multiple content resources (to be sorted and embedded):
Below is the z/OS Management Facility:
Here is a link to Steve Perva's Discord channel: https://discord.gg/sze
Here is z/OS Open Tools:
These are the notes from the mainframe modernization resources:
Smarter Planet by IBM - To make it more mainframe innovation-focused:
Gain access to an IBM Z system through the services available here.
The list below highlights several IBM Z providers available for training usage. Neither this project nor Open Mainframe Project reviews, maintains, or endorses any one of these providers.
Get three days access to a system with IBM Z software, along with their learning pathway
Attending mainframe events and conferences is essential for staying up-to-date with the latest industry trends and technologies. It offers a unique opportunity to connect with the community, share insights, and learn from peers, fostering collaboration and continuous growth in the field.
SHARE
Often in February and August
Community Day @ IBM TechXchange
Kicking off the beginning of IBM TechXchange with a series of sessions incorporating the Open Mainframe Project
IBM Z Day
Virtual one-day events
Broadcom Mainframe Technical Exchange
Spring - Europe (Prague) | Summer - Virtual | Fall - Plano, Texas
Guide Share Europe (GSE)
Fall / United Kingdom
The hub for Db2 professionals all over the world
Gathering open source in the mainframes
Rich community for IBM Z users to exchange ideas and connect with peers
Learn, develop, and test mainframe applications on x86 hardware
Cloud native development and testing for z/OS on IBM Cloud
Hands-on IBM Z education
Learn COBOL programming with complimentary access to an IBM z/OS instance
Get 120-day full access to a VM on LinuxONE
Community Day
Orlando, Florida

