Sunday, January 26, 2020

NoSQL Databases | Research Paper

In the world of enterprise computing, we have seen many changes in platforms, languages, processes, and architectures. But throughout all this change, one thing has remained constant: relational databases. For almost as long as the software profession has existed, relational databases have been the default choice for serious data storage, especially in enterprise applications. There have been times when another database technology threatened to take a piece of the action, such as object databases in the 1990s, but these alternatives never got anywhere. In this research paper, a new challenger, NoSQL, is explored. It came into existence because the need to handle large volumes of data forced a shift toward building platforms out of large numbers of commodity servers. The term NoSQL applies to a number of recent non-relational databases such as Cassandra, MongoDB, Neo4j, and Azure Table storage. NoSQL databases offer the advantages of systems that perform better, scale much better, and are easier to program against. The paper argues that we are now in a world of polyglot persistence, where enterprises use different technologies to manage their data. Architects should therefore know what these technologies are and be able to decide which ones to use for which purposes. The paper provides enough background on how NoSQL databases work and what advantages they bring to the table to help decide whether they can seriously be considered for future projects.
Table of Contents
Introduction
Literature
Technical Aspects
3.1 Document Oriented (Merits, Demerits, Case Study: MongoDB)
3.2 Key-Value (Merits, Demerits, Case Study: Azure Table Storage)
3.3 Column Stores (Merits, Demerits, Case Study: Cassandra)
3.4 Graphs (Merits, Demerits, Case Study: Neo4j)
Conclusion
References

Introduction
NoSQL is commonly interpreted as "not only SQL". It is a class of database management systems that does not adhere to the traditional RDBMS model. NoSQL databases handle a large variety of data, including structured, semi-structured, and unstructured data. NoSQL database systems are highly optimized for retrieval and append operations and offer little functionality beyond record storage. Some run-time functionality is given up compared to full SQL systems, in exchange for gains in scalability and performance for certain data models [3]. NoSQL databases prove beneficial when a huge quantity of data must be processed and the data's nature does not fit the relational model. What truly matters is the ability to store and retrieve huge amounts of data, not the relationships between the elements. This is especially useful for real-time or statistical analysis of growing amounts of data. The NoSQL community is changing rapidly, transitioning from community-driven platform development to an application-driven market. Facebook, Digg, and Twitter have successfully used NoSQL to scale up their web infrastructure. Many successful attempts have been made to develop NoSQL applications in fields such as image/signal processing, biotechnology, and defense. Traditional relational database vendors are also assessing the strategy of developing NoSQL solutions and integrating them into their existing offerings.
Literature
In recent years, with the expansion of cloud computing, the problems of data-intensive services have become prominent.
Cloud computing seems to be the future architecture to support large-scale, data-intensive applications, although there are certain application requirements that cloud computing does not fulfill sufficiently [7]. For years, the development of information systems relied on vertical scaling, but this approach requires a higher level of skill and is not reliable in some cases. Partitioning the database across multiple cheap machines added dynamically, known as horizontal scaling or scaling out, can ensure scalability more effectively and more cheaply. Today's NoSQL databases, designed for cheap hardware and built on the shared-nothing architecture, can be a better solution. The term NoSQL was coined by Carlo Strozzi in 1998 for his open-source, lightweight database which had no SQL interface. Later, in 2009, Eric Evans, a Rackspace employee, reused the term for databases which are non-relational and distributed and do not guarantee atomicity, consistency, isolation, and durability. In the same year, NoSQL was discussed extensively at the no:sql(east) conference held in Atlanta, USA, and NoSQL subsequently saw unprecedented growth [1]. Scalable and distributed data management has been the vision of the database research community for more than three decades. Much research has focused on designing scalable systems for both update-intensive workloads and ad-hoc analysis workloads [5]. Initial designs included distributed databases for update-intensive workloads and parallel database systems for analytical workloads. Parallel databases grew into large commercial systems, but distributed database systems were not very successful. Changes in the data-access patterns of applications and the need to scale out to thousands of commodity machines led to the birth of a new class of systems referred to as NoSQL databases, which are now widely adopted by various enterprises. Data processing has been viewed as a constant battle between parallelism and concurrency [4].
A database acts as a data store with an additional protective software layer, and it is constantly bombarded by transactions. To handle all the transactions, databases have two choices at each stage of a computation: parallelism, where two transactions are processed at the same time; and concurrency, where a processor rapidly switches between the two transactions mid-transaction. Parallelism is faster, but avoiding inconsistent transaction results requires coordinating software, which is hard to run in parallel because it involves frequent communication between the parallel threads of the two transactions. At a global level, this becomes a choice between distributed and scale-up single-system processing. In certain cases, relational databases designed for scale-up systems and structured data did not work well. For indexing and serving massive amounts of rich text, for semi-structured or unstructured data, and for streaming media, a relational database would require consistency between data copies in a distributed environment and would be unable to parallelize the transactions. So, to minimize costs and maximize the parallelism of these types of transactions, we turned to NoSQL and other non-relational approaches. These efforts combined open-source software, large numbers of small servers, and loose consistency constraints on distributed transactions (eventual consistency). The basic idea was to minimize coordination by identifying types of transactions where it didn't matter if some users got old data rather than the latest data, or if some users got an answer while others didn't.
Technical Aspects
NoSQL is a non-relational database management system that differs from traditional relational database management systems in significant ways. NoSQL systems are designed for distributed data stores that require large-scale data storage; they are schema-less and scale horizontally.
Relational databases rely on structured rules to govern transactions. These rules are encoded in the ACID model, which requires that the database always preserve atomicity, consistency, isolation, and durability in each transaction. NoSQL databases follow the BASE model, which provides three looser guarantees: basic availability, soft state, and eventual consistency. Two primary reasons to consider NoSQL are: to handle data access with sizes and performance demands that require a cluster; and to improve the productivity of application development by using a more convenient data-interaction style [6]. The common characteristics of NoSQL databases are: not using the relational model; running well on clusters; open-source; built for 21st-century web estates; schema-less. Each NoSQL solution uses a different data model, which can be put into four widely used categories in the NoSQL ecosystem: key-value, document, column-family, and graph. The first three share a common characteristic of their data models called aggregate orientation. Next, we briefly describe each of these data models.
3.1 Document Oriented
The main concept of a document-oriented database is the notion of a document [3]. The database stores and retrieves documents, which encapsulate and encode data in some standard format or encoding such as XML, JSON, or BSON. These documents are self-describing, hierarchical tree data structures, and the database can offer different ways of organizing and grouping documents: collections, tags, non-visible metadata, and directory hierarchies. Documents are addressed by a unique key which represents the document. Beyond simple key-document lookup, the database also offers an API or query language that allows retrieval of documents based on their content.
Fig 1: Comparison of terminology between Oracle and MongoDB
3.1.1 Merits
Intuitive data structure. Simple, natural modeling of requests with flexible query functions [2].
Can act as a central data store for event storage, especially when the data captured by the events keeps changing. With no predefined schemas, they work well in content management systems or blogging platforms. Can store data for real-time analytics; since parts of the document can be updated, it is easy to store page views, and new metrics can be added without schema changes. Provides a flexible schema and the ability to evolve data models without expensive database refactoring or data migration, which suits e-commerce applications [6].
3.1.2 Demerits
Higher hardware demands because of more dynamic database queries, partly without data preparation. Redundant storage of data (denormalization) in favor of higher performance [2]. Not suitable for atomic cross-document operations. Since the data is saved as an aggregate, if the design of an aggregate is constantly changing, aggregates have to be saved at the lowest level of granularity; in this case, document databases may not work [6].
3.1.3 Case Study: MongoDB
MongoDB is an open-source document-oriented database system developed by 10gen. It stores structured data as JSON-like documents with dynamic schemas (MongoDB calls the format BSON), making the integration of data in certain types of applications easier and faster. Language support includes Java, JavaScript, Python, PHP, and Ruby, and it also supports sharding via configurable data fields. Each MongoDB instance has multiple databases, and each database can have multiple collections [2, 6]. When a document is stored, we choose which database and collection that document belongs in. Consistency in MongoDB is configured by using replica sets and choosing to wait for writes to be replicated to a given number of slaves. Transactions at the single-document level are atomic: a write either succeeds or fails. Transactions involving more than one operation are not possible, although there are a few exceptions.
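The document model described above, a schema-less store that supports both key-document lookup and queries on document content, can be illustrated with a minimal in-memory sketch. This is a toy written with the Python standard library only, not MongoDB's actual API; the `DocumentStore` class and its method names are invented for illustration:

```python
class DocumentStore:
    """A toy document store: named collections of schema-less documents."""

    def __init__(self):
        self.collections = {}  # collection name -> {doc_id: document dict}

    def insert(self, collection, doc_id, document):
        # Documents need no predefined schema; any dict works, and two
        # documents in the same collection may have different fields.
        self.collections.setdefault(collection, {})[doc_id] = document

    def get(self, collection, doc_id):
        # Simple key-document lookup.
        return self.collections.get(collection, {}).get(doc_id)

    def find(self, collection, **criteria):
        # Query by content, not just by key: return every document whose
        # fields match all the given criteria.
        return [doc for doc in self.collections.get(collection, {}).values()
                if all(doc.get(k) == v for k, v in criteria.items())]

store = DocumentStore()
store.insert("articles", "a1", {"title": "NoSQL", "tags": ["db"], "views": 10})
store.insert("articles", "a2", {"title": "RDBMS", "views": 3})
print(store.find("articles", views=10))  # matches on content, not key
```

Note how `find` inspects document contents, which is exactly what distinguishes a document store from a plain key-value store.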
MongoDB implements replication, providing high availability via replica sets. In a replica set, two or more nodes participate in asynchronous master-slave replication. MongoDB has a query language expressed via JSON, with a variety of constructs that can be combined to create a query. With MongoDB, we can query the data inside a document without having to retrieve the whole document by its key and then introspect it. Scaling in MongoDB is achieved through sharding: the data is split by a certain field and then moved to different Mongo nodes. The data is dynamically moved between nodes to ensure that the shards are always balanced. We can add more nodes to the cluster and increase the number of writable nodes, enabling horizontal scaling for writes [6, 9].
3.2 Key-Value
A key-value store is a simple hash table, primarily used when all access to the database is via a primary key. Key-value stores allow schema-less storage of data; the stored data can be a primitive of a programming language or an object. Several variants exist: hierarchical key-value stores, eventually-consistent key-value stores, hosted services, key-value caches in RAM, ordered key-value stores, multi-value databases, tuple stores, and so on. Key-value stores are the simplest NoSQL data stores to use from an API perspective. The client can get or put the value for a key, or delete a key from the data store. The value is a blob that is simply stored without the store knowing what is inside; it is the responsibility of the application to understand what is stored [3, 6].
3.2.1 Merits
High and predictable performance. Simple data model. Clear separation of storage from application logic (because there is no query language). Suitable for storing session information. User profiles, product profiles, and preferences can be stored easily. Well suited to shopping-cart data and other e-commerce applications.
Can be scaled easily since they always use primary-key access.
3.2.2 Demerits
Limited range of functions. High development effort for more complex applications. Not the best solution when relationships between different sets of data are required. Not suited to multi-operation transactions. There is no way to inspect the value on the database side. Since operations are limited to one key at a time, there is no way to operate on multiple keys at once.
3.2.3 Case Study: Azure Table Storage
For structured storage, Windows Azure provides structured key-value pairs stored in entities known as tables. Table storage uses a NoSQL model based on key-value pairs for querying structured data that does not live in a typical database. A table is a bag of typed properties that represents an entity in the application domain. Data stored in Azure tables is partitioned horizontally and distributed across storage nodes for optimized access. Every table has a property called the partition key, which defines how data in the table is partitioned across storage nodes: rows that have the same partition key are stored in the same partition. In addition, tables can define row keys, which are unique within a partition and optimize access to a row within it. When present, the pair {partition key, row key} uniquely identifies a row in a table. Access to the Table service is through REST APIs [6].
3.3 Column Stores
Column-family databases store data in column families, as rows that have many columns associated with a row key. These stores keep data with keys mapped to values, and the values are grouped into multiple column families, each column family being a map of data. Column families are groups of related data that are often accessed together. The column-family model can be seen as a two-level aggregate structure. As with key-value stores, the first key is often described as a row identifier, picking out the aggregate of interest.
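The key-value access pattern described earlier, get/put/delete against an opaque value, fits in a few lines of Python. This is a toy in-memory sketch, not any particular product's API; the class name and example keys are invented:

```python
class KeyValueStore:
    """A toy key-value store: the value is an opaque blob the store never inspects."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # The value is stored as-is; interpreting it is the application's job.
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)  # None if the key is absent

    def delete(self, key):
        self._data.pop(key, None)  # deleting an absent key is a no-op

# Session data keyed by a session id, as in the shopping-cart use case above.
kv = KeyValueStore()
kv.put("session:42", b'{"cart": ["sku-1", "sku-2"]}')
print(kv.get("session:42"))
kv.delete("session:42")
```

Because the store never looks inside the value, every operation is a single hash lookup, which is why key-value stores give the predictable performance and easy primary-key scaling listed in the merits.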
The difference in the column-family structure is that this row aggregate is itself formed of a map of more detailed values. These second-level values are referred to as columns. The model allows accessing the row as a whole, while operations can also pick out a particular column [6].
3.3.1 Merits
Designed for performance. Native support for persistent views onto the key-value store. Sharding: distribution of data to various servers through hashing. More efficient than row-oriented systems when aggregating a few columns from many rows. Column-family databases, with their ability to store any data structure, are great for storing event information. They allow storing blog entries with tags, categories, links, and trackbacks in different columns. They can be used to count and categorize the visitors of a page in a web application to calculate analytics. They provide expiring columns: columns which are deleted automatically after a given time. This can be useful for providing demo access to users or for showing ad banners on a website for a specific time.
3.3.2 Demerits
Limited query options for the data. High maintenance effort when changing existing data, because all lists must be updated. Less efficient than row-oriented systems when accessing many columns of a single row. Not suitable for systems that require ACID transactions for reads and writes. Not good for early prototypes or initial tech spikes, as schema changes are very expensive.
3.3.3 Case Study: Cassandra
A column is the basic unit of storage in Cassandra. A Cassandra column consists of a name-value pair where the name behaves as the key. Each of these key-value pairs is a single column and is stored with a timestamp value, which is used to expire data, resolve write conflicts, deal with stale data, and more. A row is a collection of columns attached to a key; a collection of similar rows makes a column family.
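The two-level structure just described, a row key mapping to named columns where each column carries a timestamp, can be sketched as a nested map. This is a toy model, not Cassandra's implementation; the last-write-wins rule below mirrors how the per-column timestamp resolves write conflicts, as mentioned above:

```python
import time
from collections import defaultdict

class ColumnFamily:
    """A toy column family: row key -> column name -> (value, timestamp)."""

    def __init__(self, name):
        self.name = name
        self.rows = defaultdict(dict)

    def put(self, row_key, column, value, ts=None):
        # Each column is stored with a timestamp; on conflict, the write
        # with the newer timestamp wins (last-write-wins).
        ts = ts if ts is not None else time.time()
        current = self.rows[row_key].get(column)
        if current is None or ts >= current[1]:
            self.rows[row_key][column] = (value, ts)

    def get_row(self, row_key):
        # Access the row as a whole, dropping the internal timestamps.
        return {col: val for col, (val, _) in self.rows[row_key].items()}

users = ColumnFamily("users")
# Rows need not share the same set of columns:
users.put("alice", "email", "alice@example.com")
users.put("bob", "city", "Rochester")
print(users.get_row("alice"))
```

The sketch also shows why different rows can hold different columns: each row is just its own map, so adding a column to one row never touches the others.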
Each column family can be compared to a container of rows in an RDBMS table, where the key identifies the row and the row consists of multiple columns. The difference is that the rows do not need to have the same columns, and a column can be added to any row at any time without having to add it to the other rows. By design Cassandra is highly available, since there is no master in the cluster and every node is a peer. A write operation in Cassandra is considered successful once it is written to the commit log and to an in-memory structure known as a memtable. While a node is down, the data that it was supposed to store is handed off to other nodes. When the node comes back online, the changes made to the data are handed back to it. This technique, known as hinted handoff, allows for faster restoration of failed nodes. In Cassandra, a write is atomic at the row level: inserting or updating columns for a given row key is treated as a single write and will either succeed or fail. Cassandra has a query language that supports SQL-like commands, known as the Cassandra Query Language (CQL) [2, 6]. We can use CQL commands to create a column family. Scaling in Cassandra is done by adding more nodes. As no single node is a master, adding nodes to the cluster improves its capacity to support more writes and reads. This allows for maximum uptime, as the cluster keeps serving requests from clients while new nodes are being added.
3.4 Graph
Graph databases allow storing entities and the relationships between them. Entities, also known as nodes, have properties. Relationships, known as edges, can also have properties. Edges have directional significance; nodes are organized by relationships, which allows finding interesting patterns between them. The organization of the graph lets the data be stored once and then interpreted in different ways based on relationships.
Relationships are first-class citizens in graph databases; most of the value of graph databases is derived from the relationships. Relationships don't only have a type, a start node, and an end node; they can have properties of their own. Using these properties, we can add intelligence to a relationship, for example since when two people have been friends, the distance between the nodes, or the aspects shared between the nodes. These relationship properties can also be used to query the graph [2, 6].
3.4.1 Merits
Very compact modeling of networked data. High performance efficiency. Can be deployed and used very effectively in social networking. An excellent choice for routing, dispatch, and location-based services. As nodes and relationships are created in the system, they can be used to build recommendation engines. They can be used to search for patterns in relationships to detect fraud in transactions.
3.4.2 Demerits
Not appropriate when an update is required on all or a subset of entities. Some databases may be unable to handle lots of data, especially in global graph operations (those involving the whole graph). Sharding is difficult, as graph databases are not aggregate-oriented.
3.4.3 Case Study: Neo4j
Neo4j is an open-source graph database implemented in Java. It is described as an embedded, disk-based, fully transactional Java persistence engine that stores data structured in graphs rather than in tables. Neo4j is ACID compliant and easily embedded in individual applications. In Neo4j, a graph is created by making two nodes and then establishing a relationship between them. Graph databases ensure consistency through transactions. They do not allow dangling relationships: the start node and end node always have to exist, and nodes can only be deleted if they don't have any relationships attached to them. Neo4j achieves high availability by providing for replicated slaves.
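The property-graph model described above, nodes plus typed, directed relationships that each carry their own properties, with dangling relationships disallowed, can be sketched in memory. This is a toy, not Neo4j's API; the class and method names are invented for illustration:

```python
class Graph:
    """A toy property graph: nodes and typed, directed relationships."""

    def __init__(self):
        self.nodes = {}   # node id -> properties
        self.edges = []   # (start, relationship type, end, properties)

    def add_node(self, node_id, **props):
        self.nodes[node_id] = props

    def relate(self, start, rel_type, end, **props):
        # Dangling relationships are disallowed: both endpoints must exist,
        # mirroring the consistency rule described above.
        if start not in self.nodes or end not in self.nodes:
            raise ValueError("both endpoints must exist")
        self.edges.append((start, rel_type, end, props))

    def neighbours(self, node_id, rel_type):
        # Follow outgoing edges of one relationship type.
        return [end for (s, t, end, _) in self.edges
                if s == node_id and t == rel_type]

g = Graph()
g.add_node("ann", name="Ann")
g.add_node("ben", name="Ben")
g.relate("ann", "FRIEND_OF", "ben", since=2015)  # property on the relationship
print(g.neighbours("ann", "FRIEND_OF"))
```

Storing `since=2015` on the edge rather than on either node is the point of the model: queries can filter on relationship properties directly.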
Neo4j is supported by query languages such as Gremlin (a Groovy-based traversal language) and Cypher (a declarative graph query language) [6]. There are three ways to scale graph databases: adding enough RAM to the server so that the working set of nodes and relationships is held entirely in memory; improving the read scaling of the database by adding more slaves with read-only access to the data, with all writes going to the master; and sharding the data from the application side using domain-specific knowledge.
Conclusion
NoSQL databases are still evolving, and more and more enterprises are moving from traditional relational database technology to non-relational databases. But given their limitations, they will never completely replace relational databases. The future of NoSQL lies in the application-oriented use of various database tools and in their broader adoption for specialized projects involving large volumes of unstructured, distributed data with high scaling requirements. On the other hand, NoSQL data stores will struggle to displace relational databases where reliability and technological maturity matter most. NoSQL databases leave a lot of work to the application designer. Application design is an important part of non-relational databases, enabling database designers to provide certain functionality to users. Hence, a good understanding of the architecture of NoSQL systems is required. The need of the hour is to take advantage of the new trends emerging in the world of databases: the non-relational databases. An effective solution would be to combine the power of different database technologies to meet the requirements and maximize performance.

Saturday, January 18, 2020

A Shopkeeper’s Millennium

While other historians discuss American history in general, Paul E. Johnson focuses on a single subject and intelligently connects it to the broader sweep of American history. His book 'A Shopkeeper's Millennium', six years in the making, presents research on the rapid transformation of the United States in the early nineteenth century and its long-run significance and impact. The book claims that Rochester, New York was the first inland boom town in America, and it explains how, when, and why it earned that name. Having laid out the factual evidence for this claim, Johnson reveals important findings from his comprehensive study and statistical analysis. Broadly, the factors that shaped Rochester, and America as a whole, are a combination of three contexts: the economic, the social, and the political. Rochester's economic contribution to the larger America was driven primarily by the construction, opening, and flourishing of the Erie Canal in the 1820s and 1830s. This transportation system carried the flour trade of the East, helping to feed other states. After the opening of the Erie Canal, other frontier cities looked up to Rochester, New York as the model city and emulated its ideas for their own prosperity. The upheaval between the North and the South of the US at that time was neither strengthened nor prolonged by the use of the canal; rather, the canal initially helped to bridge the growing breach. Another illustration of economic growth favoring Rochester is the expansion of local grain milling and the manufacturing of agricultural products in the town.
Johnson describes in detail how farmers and women realized their potential, rising from second-class dollar earners to successful businessmen and businesswomen using only their homes and backyards as their factories. It is true that there are resources enough for every situation, but only a few can wisely seize such opportunities and use them to their maximum potential. These rags-to-riches stories of the US also include the stories of unsung heroes of American culture and history. The early nineteenth century covered many transitions in the US, one of them political. This is the period when the Whigs formed a new political party, later called the Republicans. Johnson also tackles this issue, discussing where and why the Whigs drew support from the churches and the Democrats from the working-class groups, which urged people to support the party whose promises best matched their interests. As industrialization took hold in Rochester, the society likewise gave rise to an emerging capitalism. The government, in reaction, needed to amend laws and provisions according to the new norms in Rochester and in New York. The emergence of industrialization in Rochester, New York, particularly in the frontier vicinity of the Erie Canal, caused social distinctions. Although the period is generally viewed through the lens of paternalism and the role of women is vaguely illustrated, the participation of both groups produced a disparity of roles and principles. Moreover, the working class, mostly men, built up differences with the free moral agency promoted by the mothers and women of the middle and upper classes. Religion likewise expanded in Rochester. What, then, was the role of political, social, and economic factors in the booming town of Rochester? How did these factors change Rochester?
The main political impact that changed Rochester was that the Whigs were supported by the majority of its residents and capitalists, and thus won the elections. The population of Rochester, New York comprised mostly working-class men, commonly found drunk after working hours, and the morally principled middle- and upper-class women. Their impact is presented as equally important in the booming of this inland town because of their balanced contributions to society and industry. Lastly, Johnson conveys the essence of Rochester, New York in the history of America by arguing that the most influential factor behind the Industrial Revolution as a whole was the economic factor, which took hold first in the Rochester area. This is because Rochester's economy, with its inland transportation scheme along the Erie Canal, its commercialization through agriculture, and the career shifts taking place in every home, developed during the period of revival known as the Second Great Awakening in the US.

Friday, January 10, 2020

A Human Resources Management System Essay

A Human Resources Management System (HRMS), or Human Resources Information System (HRIS), refers to the systems and processes at the intersection of human resource management (HRM) and information technology. It merges HRM as a discipline, and in particular its basic HR activities and processes, with the information technology field, as the programming of data-processing systems evolved into standardized routines and packages of enterprise resource planning (ERP) software. On the whole, these ERP systems have their origin in software that integrates information from different applications into one universal database. The linkage of financial and human-resource modules through one database is the most important distinction from the individually developed, proprietary predecessors, and it makes this software application both rigid and flexible. A Human Capital Management solution, HRMS, or HRIS, as it is commonly called, is the crossing of HR systems and processes with information technology. The wave of technological advancement has revolutionized every space of life today, and HR in its entirety has not been left untouched. What started off as simple software to help improve an organization's payroll processing, or to track employee work timings, has grown into Human Resources systems that improve process efficiency, reduce the cost and time spent on mundane tasks, and at the same time improve the overall experience of employees and HR professionals. In short, as the role of the Human Resources function evolved, HR technology systems also changed the role they played. The function of human resources (HR) departments is administrative and common to all organizations. Organizations may have formalized selection, evaluation, and payroll processes.
Management of "human capital" progressed to an imperative and complex process. The HR function consists of tracking existing employee data, which traditionally includes personal histories, skills, capabilities, accomplishments, and salary. To reduce the manual workload of these administrative activities, organizations began to automate many of these processes by introducing specialized human resource management systems. HR executives rely on internal or external IT professionals to develop and maintain an integrated HRMS. Before client-server architectures evolved in the late 1980s, many HR automation processes were relegated to mainframe computers that could handle large volumes of data transactions. Because of the high capital investment necessary to buy or program proprietary software, these internally developed HRMS were limited to organizations that possessed a large amount of capital. The advent of client-server, application service provider, and software-as-a-service (SaaS) human resource management systems enabled higher administrative control of such systems. Currently, human resource management systems encompass:
1. Payroll
2. Time and attendance
3. Performance appraisal
4. Benefits administration
5. HR management information system
6. Recruiting/Learning management
7. Performance record
8. Employee self-service
9. Scheduling
10. Absence management
11. Analytics
The payroll module automates the pay process by gathering data on employee time and attendance, calculating various deductions and taxes, and generating periodic pay cheques and employee tax reports. Data is generally fed from the human resources and timekeeping modules to support automatic deposit and manual cheque-writing capabilities. This module can encompass all employee-related transactions as well as integrate with existing financial management systems. The time and attendance module gathers standardized time and work-related effort data.
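As a rough illustration of the kind of arithmetic the payroll module performs, here is a minimal net-pay sketch. The 20% tax and 5% benefits rates are made-up placeholders, not real payroll rules, and the function name is invented:

```python
def net_pay(hours_worked, hourly_rate, tax_rate=0.20, benefits_rate=0.05):
    """Toy payroll calculation: gross pay minus flat-rate deductions.

    The flat rates stand in for the per-employee tax and benefits
    deductions a real payroll module would compute from configured rules.
    """
    gross = hours_worked * hourly_rate
    deductions = gross * (tax_rate + benefits_rate)
    return round(gross - deductions, 2)

# 160 hours at $25/h: gross 4000.00, deductions 1000.00, net 3000.00
print(net_pay(160, 25.0))
```

A real module would of course draw hours from the time and attendance data and apply jurisdiction-specific tax tables rather than a single flat rate.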
The most advanced modules provide broad flexibility in data-collection methods, labor-distribution capabilities, and data-analysis features. Cost analysis and efficiency metrics are the primary functions. The benefits administration module provides a system for organizations to administer and track employee participation in benefits programs, which typically encompass insurance, compensation, profit sharing, and retirement. The HR management module is a component covering many other HR aspects from application to retirement. The system records basic demographic and address data, selection, training and development, capabilities and skills management, compensation planning records, and other related activities. Leading-edge systems provide the ability to "read" applications and enter the relevant data into applicable database fields, notify employers, and provide position management and position control. The human resource management function involves the recruitment, placement, evaluation, compensation, and development of the employees of an organization. Initially, businesses used computer-based information systems to produce pay checks and payroll reports, maintain personnel records, and pursue talent management. Online recruiting has become one of the primary methods employed by HR departments to garner potential candidates for available positions within an organization. Talent management systems typically encompass analyzing personnel usage within an organization; identifying potential applicants; recruiting through company-facing listings; and recruiting through online recruiting sites or publications that market to both recruiters and applicants. The significant cost incurred in maintaining an organized recruitment effort, cross-posting within and across general or industry-specific job boards, and maintaining a competitive exposure of availabilities has given rise to the development of a dedicated applicant tracking system, or ATS, module.
The training module provides a system for organizations to administer and track employee training and development efforts. The system, normally called a "learning management system" (LMS) if a standalone product, allows HR to track the education, qualifications and skills of employees, as well as outlining what training courses, books, CDs, web-based learning or materials are available to develop which skills. Courses can then be offered in date-specific sessions, with delegates and training resources being mapped and managed within the same system. Sophisticated LMS allow managers to approve training, budgets and calendars alongside performance management and appraisal metrics. The employee self-service module allows employees to query HR-related data and perform some HR transactions over the system. Employees may query their attendance record from the system without requesting the information from HR personnel. The module also lets supervisors approve overtime requests from their subordinates through the system without overloading the task on the HR department. Many organizations have gone beyond the traditional functions and developed human resource management information systems, which support recruitment, selection, hiring, job placement, performance appraisals, employee benefit analysis, health, safety and security, while others integrate an outsourced applicant tracking system that encompasses a subset of the above, along with assigning responsibilities and communication between employees. The Analytics module enables organizations to extend the value of an HRMS implementation by extracting HR-related data for use with other business intelligence platforms. For example, organizations combine HR metrics with other business data to identify trends and anomalies in headcount in order to better predict the impact of employee turnover on future output.
Management of Employee Turnover and Employee Retention
Employee retention refers to the ability of an organization to retain its employees. Employee retention can be represented by a simple statistic (for example, a retention rate of 80% usually indicates that an organization kept 80% of its employees in a given period). However, many consider employee retention as relating to the efforts by which employers attempt to retain employees in their workforce. In this sense, retention becomes the strategies rather than the outcome. A distinction should be drawn between low-performing employees and top performers, and efforts to retain employees should be targeted at valuable, contributing employees. Employee turnover is a symptom of a deeper issue that has not been resolved. These deeper issues may include low employee morale, absence of a clear career path, lack of recognition, poor employee-manager relationships or many other issues. A lack of satisfaction and commitment to the organization can also cause an employee to withdraw and begin looking for other opportunities. Pay does not always play as large a role in inducing turnover as is typically believed. In a business setting, the goal of employers is usually to decrease employee turnover, thereby decreasing training costs, recruitment costs and loss of talent and organisational knowledge. By implementing lessons learned from key organizational behavior concepts, employers can improve retention rates and decrease the associated costs of high turnover. However, this isn't always the case: employers can seek "positive turnover", whereby they aim to retain only those employees whom they consider to be high performers. In a human resources context, turnover or staff turnover or labour turnover is the rate at which an employer loses and gains employees. Simple ways to describe it are "how long employees tend to stay" or "the rate of traffic through the revolving door".
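The 80% retention figure mentioned above, and the turnover rate that mirrors it, are simple ratios. A short sketch with invented headcounts (the function names are my own, not standard HR software APIs):

```python
def retention_rate(retained, headcount_at_start):
    """Share of the starting workforce still employed at period end."""
    return retained / headcount_at_start

def turnover_rate(separations, average_headcount):
    """Separations during the period relative to average headcount."""
    return separations / average_headcount

# Hypothetical numbers: 100 employees at the start of the year,
# 80 still employed at year end, 20 separations in between.
assert retention_rate(80, 100) == 0.8   # the 80% example from the text
assert turnover_rate(20, 100) == 0.2
```

Tracking these ratios per department or demographic group, as the next paragraph describes, is just a matter of computing them over each subgroup's headcounts.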
Turnover is measured for individual companies and for their industry as a whole. If an employer is said to have high turnover relative to its competitors, it means that employees of that company have a shorter average tenure than those of other companies in the same industry. High turnover may be harmful to a company's productivity if skilled workers are often leaving and the worker population contains a high percentage of novice workers. Companies also often track turnover internally across departments and divisions or other demographic groups, such as turnover of women versus turnover of men.
Retention Programs
It is important to first pinpoint the root cause of the retention issue before implementing a program to address it. Once identified, a program can be tailored to meet the unique needs of the organization. A variety of programs exist to help increase employee retention. Career Development – It is important for employees to understand their career path within an organization to motivate them to remain in the organization to achieve their personal career goals. Through surveys, discussion and classroom instruction, employees can better understand their goals for personal development. With these developmental goals in mind, organizations can offer tailored career development opportunities to their employees. Executive Coaching – Executive coaching can be used to build competencies in leaders within an organization. Coaching can be useful in times of organizational change, to increase a leader's effectiveness or to encourage managers to implement coaching techniques with peers and direct reports. The coaching process begins with an assessment of the individual's strengths and opportunities for improvement. The issues are then prioritized and interventions are delivered to target key weaknesses. Assistance is then provided to encourage repeated use of newly acquired skills.
Motivating Across Generations – Today's workforce includes a diverse population of employees from multiple generations. As each generation holds different expectations for the workplace, it is important to understand the differences between these generations regarding motivation and engagement. Managers, especially, must understand how to handle the differences among their direct reports. Orientation and Onboarding – An employee's perception of an organization takes shape during the first several days on the job. It is in the best interest of both the employee and the organization to impart knowledge about the company quickly and effectively to integrate the new employee into the workforce. By implementing an effective onboarding process, short-term turnover rates will decrease and productivity will increase. Women's Retention Programs – Programs such as mentoring, leadership development and networking that are geared specifically toward women can help retain top talent and decrease turnover costs. By implementing programs to improve work/life balance, employees can be more engaged and productive while at work. Exit Interview and Separation Management Programs
Retention tools and resources
Employee Surveys – By surveying employees, organizations can gain insight into the motivation, engagement and satisfaction of their employees. It is important for organizations to understand the perspective of the employee in order to create programs targeting any particular issues that may impact employee retention. Exit Interviews – By including exit interviews in the process of employee separation, organizations can gain valuable insight into the workplace experience. Exit interviews allow the organization to understand the triggers of the employee's desire to leave as well as the aspects of their work that they enjoyed. The organization can then use this information to make necessary changes to their company to retain top talent.
Exit interviews must, however, ask the right questions and elicit honest responses from separating employees to be effective. Employee Retention Consultants – An employee retention consultant can assist organizations in the process of retaining top employees. Consultants can provide expertise on how to best identify the issues within an organization that are related to turnover. Once identified, a consultant can suggest programs or organizational changes to address these issues and may also assist in the implementation of these programs or changes.
Employee retention best practices
By focusing on the fundamentals, organizations can go a long way towards building a high-retention workplace. Organizations can start by defining their culture and identifying the types of individuals that would thrive in that environment. Organizations should adhere to fundamental new-hire orientation and onboarding plans. Attracting and recruiting top talent requires time, resources and capital. However, these are all wasted if employees are not positioned to succeed within the company. Research has shown that an employee's first 10 days are critical because the employee is still adjusting and getting acclimated to the organization. Companies retain good employees by being employers of choice. Recruitment – Presenting applicants with realistic job previews during the recruitment process has a positive effect on retaining new hires. Employers that are transparent about the positive and negative aspects of the job, as well as the challenges and expectations, are positioning themselves to recruit and retain stronger candidates. Selection – There is a plethora of selection tools that can help predict job performance and, subsequently, retention. These include both subjective and objective methods, and while organizations are accustomed to using more subjective tools such as interviews, application and resume evaluations, objective methods are increasing in popularity.
For example, utilizing biographical data during selection can be an effective technique. Biodata empirically identifies life experiences that differentiate those who stay with an organization from those who quit. Life experiences associated with employees may include tenure on previous jobs, education experiences, and involvement and leadership in related work experiences. Socialization – Research has shown that socialization practices, delivered via a strategic onboarding and assimilation program, can help new employees become embedded in the company and thus more likely to stay. These practices include shared and individualized learning experiences and activities that allow people to get to know one another. Such practices may include providing employees with a role model, mentor or trainer, or providing timely and adequate feedback. Training and development – Providing ample training and development opportunities can discourage turnover by keeping employees satisfied and well-positioned for future growth opportunities. In fact, dissatisfaction with potential career development is one of the top three reasons employees (35%) often feel inclined to look elsewhere. If employees are not given opportunities to continually update their skills, they are more likely to leave. Those who receive more training are less likely to quit than those who receive little or no training. Employers that fear providing training will make their employees more marketable, and thus increase turnover, can offer job-specific training, which is less transferable to other contexts. Additionally, employers can increase retention through development opportunities such as allowing employees to further their education and reimbursing tuition for employees who remain with the company for a specified amount of time.
Compensation and rewards – Pay levels and pay satisfaction are only modest predictors of an employee's decision to leave the organization; however, organizations can lead the market with a strong compensation and reward package, as 53% of employees often look elsewhere because of poor compensation and benefits. Organizations can explicitly link rewards to retention (e.g. tie vacation hours to seniority, offer retention bonus payments or employee stock options, or tie defined-benefit plan payouts to years of service). Research has shown that defined compensation and rewards are associated with longer tenure. Additionally, organizations can also look to intrinsic rewards such as increased decision-making autonomy. Effective Leaders – An employee's relationship with his/her immediate supervisor or manager is equally important to making an employee feel embedded and valued within the organization. Supervisors need to know how to motivate their employees and reduce cost while building loyalty in their key people. Managers need to reinforce employee productivity and open communication, to coach employees and provide meaningful feedback, and to inspire employees to work as an effective team. In order to achieve this, organizations need to prepare managers and supervisors to lead and develop effective relationships with their subordinates. Executive coaching can help increase an individual's effectiveness as a leader as well as boost a climate of learning, trust and teamwork in an organization. To encourage supervisors to focus on retention among their teams, organizations can incorporate a retention metric into their organization's evaluation. Employee Engagement – Employees who are satisfied with their jobs, enjoy their work and the organization, believe their job to be more important, take pride in the company, and feel their contributions are impactful are five times less likely to quit than employees who are not engaged.
Engaged employees give their companies crucial competitive advantages, including higher productivity and lower employee turnover.

Thursday, January 2, 2020

Introduction to the Conditions and Categories of Price Discrimination

On a general level, price discrimination refers to the practice of charging different prices to different consumers or groups of consumers without a corresponding difference in the cost of providing a good or service.
Conditions Necessary for Price Discrimination
In order to be able to price discriminate among consumers, a firm must have some market power and not operate in a perfectly competitive market. More specifically, a firm must be the only producer of the particular good or service that it provides. (Note that, strictly speaking, this condition requires that a producer be a monopolist, but the product differentiation present under monopolistic competition could allow for some price discrimination as well.) If this were not the case, firms would have an incentive to compete by undercutting competitors' prices to the high-priced consumer groups, and price discrimination could not be sustained. If a producer wants to discriminate on price, it must also be the case that resale markets for the producer's output do not exist. If consumers could resell the firm's output, then consumers who are offered low prices under price discrimination could resell to consumers who are offered higher prices, and the benefits of price discrimination to the producer would vanish.
Types of Price Discrimination
Not all price discrimination is the same, and economists generally organize price discrimination into three separate categories. First-Degree Price Discrimination: First-degree price discrimination exists when a producer charges each individual his or her full willingness to pay for a good or service. It is also referred to as perfect price discrimination, and it can be difficult to implement because it's generally not obvious what each individual's willingness to pay is. Second-Degree Price Discrimination: Second-degree price discrimination exists when a firm charges different prices per unit for different quantities of output.
Second-degree price discrimination usually results in lower per-unit prices for customers buying larger quantities of a good, and vice versa. Third-Degree Price Discrimination: Third-degree price discrimination exists when a firm offers different prices to different identifiable groups of consumers. Examples of third-degree price discrimination include student discounts, senior citizen discounts, and so on. In general, groups with higher price elasticity of demand are charged lower prices than other groups under third-degree price discrimination, and vice versa. While it may seem counterintuitive, it is possible that the ability to price discriminate actually reduces the inefficiency that results from monopolistic behavior. This is because price discrimination enables a firm to increase output and offer lower prices to some customers, whereas a monopolist that had to lower the price for all consumers might not be willing to lower prices and increase output at all.
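The elasticity claim above (more elastic groups are charged less) follows from the standard monopoly pricing condition MR = MC, which for a group with demand elasticity e rearranges to P(1 − 1/|e|) = MC. A short sketch; the groups and numbers are invented purely for illustration:

```python
def third_degree_price(marginal_cost, elasticity):
    """Group price from the inverse-elasticity (Lerner) rule:
    P * (1 - 1/|e|) = MC, which only yields a finite positive
    price when demand is elastic (|e| > 1) at the optimum."""
    e = abs(elasticity)
    if e <= 1:
        raise ValueError("demand must be elastic (|e| > 1) at the optimum")
    return marginal_cost / (1 - 1 / e)

# Hypothetical groups: students more price-elastic than regular buyers.
mc = 10.0
p_students = third_degree_price(mc, elasticity=-4)  # 10 / (1 - 0.25)
p_regular = third_degree_price(mc, elasticity=-2)   # 10 / (1 - 0.5) = 20.0
assert p_students < p_regular  # higher elasticity -> lower price
```

The same rule with a single market-wide elasticity gives the uniform monopoly price, which is why splitting consumers into identifiable groups lets the firm serve the elastic group at a price it would never offer everyone.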