  • 05 Feb 2024 11:21 AM | Anonymous

    Interview with Pierre Sarkis: How is Pierre Sarkis unlocking the power of HR data?

    Human resources data plays a pivotal role in modern organizations, serving as the lifeblood of informed decision-making and strategic planning. The importance of HR data lies in its ability to provide valuable insights into employee engagement, talent management, and organizational culture. By analyzing this data, HR professionals and business leaders can make data-driven decisions that enhance employee satisfaction, optimize resource allocation, and ultimately drive the success of the organization. 

    Enter Side.

    Positioned as a substitute for spreadsheets in HR reporting, this platform grants registered HR managers access to a wide array of data, including headcount evolution, male/female ratio, talent acquisition funnel, average tenure by department and much more. In this Q&A, Pierre Sarkis, the CEO and Founder of Side, provides an in-depth exploration of the product and explains how HR managers can fully leverage its capabilities.


    When was Side launched?
    We launched Side in September 2023, serving mid-sized companies looking to improve their people decisions and digitize their HR functions. Our solution is designed to be both powerful and accessible, allowing organizations to achieve these goals without requiring substantial investments or embarking on a full software migration.

    How can HR professionals utilize it, and what specific insights does it provide?
    Side offers out-of-the-box templates or customized dashboards for visualizing and reporting key HR metrics. We think of our tool as a starter kit for organizations seeking to leverage their people's data for decision-making around the workforce while spending minimal time and resources on data analytics infrastructure that often falls outside of HR departments’ domain of expertise. We've prioritized accessibility by seamlessly integrating with existing data systems or spreadsheets, automatically unlocking a wide range of insights related to demographics, diversity, turnover, recruitment and more.

    What type of HR data do you work with?
    For the moment our metrics and dashboards are integrated with current workforce and recruitment data. However, we plan to expand our capabilities to encompass other types of HR data, such as performance metrics, training records, and survey results.

    What kind of informed decision-making does the platform enable for HR professionals?
    Our platform centralizes HR data, offering improved data visualization and real-time updates without the need for manual intervention. We’ve found that oftentimes getting access to aggregated metrics enables decision makers to see immediate bottlenecks and gaps in their workforce. With our tool, we anticipate that HR professionals can better understand their workforce distribution with regard to departmental segmentation, diversity and inclusion, as well as recruiting needs and shortcomings in employee retention.

    What is your revenue model? (per company or registered employee?)
    Our revenue model is based on a monthly subscription tiered by company size, as well as individually priced customized services upon request.

    How large is your team?
    While I am the sole founder, I benefit from the expertise of Mohammad Jouni as our technical advisor, the assistance of a freelance full-stack engineer, and the valuable guidance in HR analytics and data science from Karen Bouez. Nevertheless, we anticipate expanding our team in the near future.

    How many clients do you have on board now?
    Side currently serves five customers ranging from an 80-employee company to a 1000-employee enterprise, spanning diverse sectors including IT, education, and distribution.

    What challenges are you facing at the moment?
    We are facing the challenge of integrating with various data sources, but we are actively addressing this by building the integration capabilities progressively. On the sales front, we are encountering extended sales cycles; however, we are proactively managing this by implementing free trials to enhance adoption rates.

    Are you bootstrapping or looking for funding?
    We are currently bootstrapping and do not have any plans for fundraising at this time.

    Where do you see the company in the next 5 years? 

    In the next five years, we envision Side as a leader in accessible people analytics, providing a holistic solution for HR teams to optimize their people-related strategies. Additionally, we plan to extend our services to include business intelligence as a service, assisting companies that may lack the resources to develop tailored dashboards for tracking KPIs not only within HR but also across broader business functions.


    Pierre's Bio: 

    Pierre Sarkis grew up in Lebanon where he started his academic journey at the American University of Beirut before moving to France in 2015 to pursue a master's in management at HEC Paris. During his academic years and early professional career, Pierre discovered a passion for launching small side businesses. From e-book selling to creating a hat brand, he embraced the challenges, learning valuable lessons along the way. The process of building and growing these ventures fueled his enthusiasm, highlighting his love for continual learning. His trajectory took a significant turn when he joined Amazon as a Product Manager in Paris. Here, he deepened his interest in developing and deploying internal tools to empower data-driven decision-making. This experience became pivotal in shaping Pierre's approach to leveraging technology for efficient decision support.

    Seeking new horizons, Pierre relocated to New York City within Amazon, where he continued to grow in the company until mid-2023. Driven by an entrepreneurial spirit and a desire to make a meaningful impact, he made the bold decision to leave Amazon and create his own company, Side.

    With Side, Pierre aims to assist businesses, particularly HR teams facing resource constraints and a lack of suitable tools. His vision is to empower organizations to make informed decisions for their people, exploiting the wealth of underutilized data at their disposal.

  • 08 Jan 2024 11:38 AM | Anonymous

    Author: This article is part of LebNet’s expert series written by Dimitri J. Stephanou, a former CIO and Managing Partner of Pekasso Group, a boutique management consulting firm that provides advisory services in digital transformation, operational excellence, strategic planning, IT organization assessment and restructuring, data domain definition and strategy, service design, systems implementation and cloud migration, and M&A integration support.

    He holds an MS in electrical & computer engineering from George Mason University and is certified in ITIL® Service Management Foundation. A dynamic business innovator, Dimitri is the sole inventor on several patents in the technology and SaaS space, served as an independent judge for SAP, and is the winner of the Gold and Bronze SAP ERP Awards.

    Digital transformation is the integration of technology into every business process, fundamentally automating business operations across the enterprise and delivering on the business value proposition communicated to customers and partners. It's also a cultural change that requires organizations to disrupt the established status quo, innovate and continuously improve in an agile manner.

    Operational Excellence (OpEx) is the discipline of optimizing the 3 pillars of a business—namely people, process, and technology—to innovate the business and deliver exceptional customer experience (CX), while achieving significant improvement to the top and bottom lines. Optimizing this triad of people, process, and technology must be done in close alignment with an agile business strategy focused on agile, innovative digital transformation.

    Borrowed from software development, being “agile” enables the business to implement incremental innovation within several “sprints,” where each sprint is reviewed and tested while maintaining momentum across the larger strategic digital transformation plan. A sprint is, thus, a time-boxed iteration within the continuous improvement and transformation cycle that allows a business to validate the work done in that iteration and fix any deficiencies or improve on the work done in subsequent sprints. An agile business can, thus, respond quicker and more effectively to opportunities and threats found in its internal and external environments.

    The agile methodology allows the business to learn from mistakes much earlier in the digital transformation cycle, thus saving time and money and, more importantly, improving the quality of the final product or service that the business creates. Consequently, an agile business is customer-centric, with the goal of maximizing customer experience in every touchpoint on the customer journey.

    Strategy serves as the compass for the organization and incorporates the long-term purpose of its actions. Agility suggests a fundamental flexibility to experiment and iterate at the tactical level. When both are incorporated into the business fabric as an OpEx framework, the result is a continuous improvement and innovative transformation culture that positions the business to become an industry disruptor rather than run the risk of being disrupted by more innovative, agile competitors.

    It is this balancing act between strategy, organizational agility, and innovation that enables effective digital transformation and operational excellence. By balancing all three effectively, an organization can prioritize its project streams and associated sprints, allocate the right talent to those streams, streamline and automate enterprise processes, and innovate the business model through the introduction of new technologies and systems.

    Innovation versus business as usual

    It is very important to separate innovative transformation plans from the plans that help a business do things better. Both are necessary for it to become an industry disruptor, and both should be executed and measured apart from one another. Even so, separating those plans doesn’t mean losing holistic visibility at a strategic level on how those plans work together and feed each other to achieve the common goals of superior customer experience and long-term sustainable business growth.

    A successful agile OpEx framework is one that enables this holistic visibility and measurement of progress on the two types of plans mentioned above vis-à-vis the corporate strategy through a unified management system. Such a system allows for a more effective quarterly prioritization of objectives and measurement of key results. More importantly, it allows for agility at the tactical level, enabling management to adjust previously agreed quarterly objectives, replace them, or introduce new ones.

    Benefits of such a system include:

    1. Improved customer experience

    2. Efficient use of resources

    3. Process optimization and elimination of waste

    4. Faster learning cycles

    5. Lower defect rates

    6. Creation of shareholder value

    7. Higher employee engagement

    8. Closer collaboration with partners and suppliers

    A proven example of such a unified management system is the Objectives and Key Results (OKRs) framework, used for many years by successful companies including Google, Amazon, Spotify, LinkedIn, and Intel. The objectives (O) are typically stretch goals, while the key results (KR) are quantifiable and measurable as a percentage of completeness. The OKRs framework is typically limited to five key objectives, each with a handful of key results. By limiting the number of objectives, the system forces the organization to select and focus on the most critical ones. The quarterly review process provides the mechanism to make changes if and when they are deemed necessary.
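
    As a rough illustration of the mechanics described above, the sketch below shows one possible way to represent a quarter’s objectives and score their key results as percentages of completeness. It is a minimal sketch, not a prescribed implementation; the class and field names are hypothetical, and the five-objective cap simply mirrors the focus the framework is meant to enforce.

    ```python
    # Hypothetical representation of a quarterly OKR set (illustrative only).
    from dataclasses import dataclass, field

    @dataclass
    class KeyResult:
        description: str
        target: float       # quantifiable target, e.g. "50 customer interviews"
        current: float = 0.0

        def completion(self) -> float:
            """Completion as a fraction of the target, capped at 100%."""
            return min(self.current / self.target, 1.0) if self.target else 0.0

    @dataclass
    class Objective:
        title: str
        key_results: list[KeyResult] = field(default_factory=list)

        def completion(self) -> float:
            """An objective's score is the average of its key results."""
            if not self.key_results:
                return 0.0
            return sum(kr.completion() for kr in self.key_results) / len(self.key_results)

    class QuarterlyOKRs:
        MAX_OBJECTIVES = 5  # the framework forces focus on a handful of critical objectives

        def __init__(self, quarter: str):
            self.quarter = quarter
            self.objectives: list[Objective] = []

        def add(self, objective: Objective) -> None:
            if len(self.objectives) >= self.MAX_OBJECTIVES:
                raise ValueError("Keep only the most critical objectives")
            self.objectives.append(objective)

        def review(self) -> dict[str, float]:
            """Quarterly review: percent completion per objective."""
            return {o.title: round(o.completion() * 100, 1) for o in self.objectives}
    ```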

    The OKRs quarterly management system thus ensures that all tasks from each level of the organization are feeding into the operational and innovative transformation objectives, which, in turn, feed into the overall strategy of the organization. This agile system enables all staff to know clearly how their specific goals are contributing to the company’s overarching strategic plan and provides the holistic visibility needed to both tactically and strategically execute well, thus achieving Operational Excellence.

  • 20 Jun 2022 9:23 AM | Anonymous member

    Author: This article is part of an expert series written by Hicham Ghoussein, CEO and Founder at Endeavor Composites, Inc. Ghossein earned his Ph.D. in 2018 from The University of Tennessee, Knoxville. He served as an entrepreneurial fellow at the Innovation Crossroads Program at Oak Ridge National Laboratory, where he worked on scaling up and commercializing a carbon fiber nonwoven technology that allowed the production of turnkey preform solutions for advanced composites manufacturing. He was awarded The Heart of Smoke and Fire challenge coin by Scot Forge Space Program in 2016, The Eisenhower School of Defense - Advanced Manufacturing challenge coin and The Secretary of The Army - Civilian Aide challenge coin in 2018.

    The advancement of artificial intelligence, computational capacity, and improved sources of power for transportation have created a paradigm similar to the discovery of the steam engine that led to the first industrial revolution. In the automotive industry, this evolution can be witnessed in the electrification of fleets and the push to expand vehicle range. Similar efforts can be seen in other transportation industries. Even the aerospace community is buzzing with efforts to develop eVTOL (Electric Vertical Take-off and Landing) as a potential future model of urban transportation. 

    Many will attribute these advancements to the enhancement of output for electrical batteries and electric motors. However, an evolution is also happening behind the scenes with the building blocks of these vehicles. Original equipment manufacturers (OEMs) realized that reducing the weight of their products would allow them to extend the range of their vehicles given the same amount of energy. To do so, many are experimenting with and investigating the use of fiber-reinforced polymer (FRP) composite materials in place of aluminum and metal alloys. With its high strength-to-weight ratio, FRP has been the material of choice for high-end vehicles and jet fighters. Today, innovation efforts are underway to manufacture it with a cost efficiency that meets the needs of everyday vehicles. A lighter vehicle yields lower emissions, hence making the vehicle more environmentally friendly.

    A composite, as the name suggests, is the combination of two or more materials that carries the properties and benefits of its components. These components are defined as reinforcement and matrix. The reinforcement is the element that provides much of the strength to the material. The matrix encapsulates the reinforcement and provides the first contact with the surrounding elements. A perfect example, even though it is not a polymeric matrix, is reinforced concrete, where the concrete acts as the matrix and the metallic rebars act as the reinforcement. The interest in FRP stems from the lightweight nature of these materials, which rivals that of aluminum and metal alloys. Additionally, most FRPs are recyclable, or at least have reinforcing fibers that can be reclaimed. The flow chart in Figure 1 details the categories of FRP based on the type of reinforcement. These categories are defined by the ratio of fiber length to its diameter, known as the aspect ratio.

    Figure 1: reinforcement categories flow chart
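
    To make the idea of a composite “carrying the properties of its components” concrete, the sketch below uses the classical rule of mixtures, a textbook first-order estimate that the article itself does not discuss. The fiber and matrix moduli and the 50% fiber volume fraction are only typical, rounded values.

    ```python
    # Illustrative sketch, not from the article: the rule of mixtures estimates a
    # composite property (here, stiffness along the fiber direction) from the
    # properties of its components, weighted by volume fraction.
    def rule_of_mixtures(fiber_modulus_gpa: float,
                         matrix_modulus_gpa: float,
                         fiber_volume_fraction: float) -> float:
        """Longitudinal modulus of a continuous, aligned-fiber composite."""
        vf = fiber_volume_fraction
        return vf * fiber_modulus_gpa + (1.0 - vf) * matrix_modulus_gpa

    # Example: carbon fiber (~230 GPa) in an epoxy matrix (~3 GPa) at 50% fiber volume.
    print(f"{rule_of_mixtures(230.0, 3.0, 0.5):.0f} GPa along the fibers")  # ~116 GPa
    ```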

    The Reinforcements: 

    Continuous reinforcements are the first to come to mind when talking about composites. They have the iconic weave pattern seen in Figure 2; they add elegance and reflect order in the design on top of their high performance. Continuous reinforcements are used in areas of structural load and as encapsulating shells. They are limited in manufacturing methods, as they have proven challenging when processing complex-shaped parts with deep draws or angles.


    Figure 2: Typical weave pattern for carbon fiber reinforcement. 

    Next comes the discontinuous reinforcement, mainly present in injection molding, extrusion compression molding and 3D printing techniques. A discontinuous reinforcement is one where the fiber length is shorter than the part length. Recently, efforts to produce non-woven fabrics with discontinuous reinforcements have gained traction in the market. Discontinuous FRPs are desired for complex-shaped parts due to their high formability and ease of production. The fibers can be man-made, such as carbon, glass, aramid, or basalt. But they can also be natural, such as bamboo, flax, hemp, jute, coir, or banana. Natural fibers have recently gained popularity due to their inherent sustainability and affordability. These different materials all have their niche. Man-made fibers remain dominant in high performance applications. Carbon fibers add stiffness and rigidity to the composite, while glass improves impact resistance and insulation. Aramids are great for ballistic protection in their woven format, and for skid resistance in discontinuous format.

    The reinforcement’s role, as the name indicates, is to carry the load applied to the material and resist deformation. In a well fabricated part with perfect bonding between matrix and reinforcement, the latter will be the first to fail, which is why the choice of reinforcement type and quantity is highly dependent on the application itself. Figure 3 shows the connection between the mechanical properties and the length of the reinforcing fibers.

    Figure 3: Relative properties of a composite in relation to the fiber length in the reinforcement. The graph shows that the mechanical properties of the composite increase with fiber length, while processability decreases with fiber length. The modulus of a material indicates its stiffness and ability to maintain its shape under constant load. Strength indicates the highest load a material can withstand before deformation, and impact is the maximum resistance to blunt-force damage that the material can take. Processability indicates the ease of use of the material in different manufacturing techniques.

    Photo credit: Nguyen B.N. et al., long fiber thermoplastic injection molded composites: From process modeling to property prediction, SPE Automotive Composites Conference and Exposition, Troy, MI, CD-ROM Proceedings, 2005.

    The Matrices:

    Polymeric matrices fall under two categories, thermosets and thermoplastics. Thermosets are crosslinked polymers and often require a catalyst to initiate the crosslinking reaction. The chemical reaction is directly dependent on temperature, and once crosslinking occurs the matrix is set and the process is non-reversible. Its most widely known products are epoxies. High performance parts are made from a prepreg preform, where the thermoset is already introduced to the fabric in a tacky state; as heat is applied, the reaction accelerates to finalize the crosslinking process in a mold. This technique is mainly used in the aerospace industry and yields parts with up to 50% fibers by volume. Other methods of fabrication include resin transfer molding (RTM) and vacuum assisted resin transfer molding (VARTM). These techniques are mostly used in the automotive and marine industries thanks to their low cost and ease of production. Finally, sheet molding compound (SMC) is one of the most processed materials in transportation due to its ability to form a part in less than a minute. SMC is made from a putty of discontinuous fibers and resin that gets squeezed in a heated metallic mold.

    Thermoplastics become pliable with the application of heat. Hence, they can be softened and formed into a desired shape. Their polymeric chains do not form crosslinking bonds, allowing for fluidity once brought to a temperature above their melting point. Thermoplastics can be made from commodity plastics, like polypropylene (PP), polyethylene (PE), or polyvinyl chloride (PVC). They are produced in high volumes for consumer applications where high mechanical performance is not required. Another category of thermoplastics is engineering plastics, which are designed to withstand different mechanical and environmental conditions. These engineered plastics can be found in high-end applications such as eVTOL aircraft.

    The Market:

    The composites market is expected to grow at a compound annual growth rate of 6.6% from 2019 to 2028, going from $89.04 billion to $144.5 billion. This is due to the increase in demand for performance materials in various industries such as automotive and transportation; wind energy; and aerospace and defense. The market is made up of a few key players holding most of the market share, yet the industry is seeing a rise in the number of new startups and companies that are bringing major manufacturing innovation and addressing supply chain and environmental challenges. 

    Figure 4: Bar chart (source: Statista)

    For years, the supply chain lacked visibility, and its accumulated organizational entropy set the stage for the disruption spurred by the pandemic. Shipment delays and reliance on international supplies proved challenging. Today, the industry is evaluating new suppliers, digitizing its supply chain and relying more on recycled materials where possible to reduce and reuse, limiting its dependence on foreign supplies. An assessment of manufacturing ecosystems will lead to producing raw materials closer to the source on every continent, triggering the establishment of new refineries as well as the growth of new crops for natural fiber and bio-resin supplies. The shipping industry will have to pivot and prepare for a change in demand as a consequence of the operational changes large corporations are undertaking as they adopt these new materials.

    As a closing note, the world is pivoting towards a new era: the era of advanced composites. This relatively new class of materials is booming and will reshape the industry. Cementing this growth is the speed of adoption following the pandemic. Corporations’ demand for lighter, faster, and stronger materials has redefined their supply chain structures and set them to work towards a brighter future.

  • 20 May 2022 9:47 AM | Anonymous member

    Author: This article is part of an expert series written by Ned Taleb, Co-Founder and CEO at several companies, including B-Yond, a company providing AI-powered network automation solutions, and Reailize, a business solutions company; he is also Co-Founder at Nexius, a company providing innovative end-to-end deployment services and smart solutions built on the latest technologies, among others. Taleb was named EY Entrepreneur of the Year in 2014 and teaches Entrepreneurship at IE Business School.

    [Disclaimer: The below article is the author's personal opinion]

    I am not here to tell you, “Hey, Lebanon is now cheap and therefore an opportunity for outsourcing.” I am here to tell you that my brother and I have done this for 13 years, and there has never been a better time to do so. I am writing this article to share the pros, but also the cons, of working with outsourcing companies in Lebanon, so that you, as a Lebanese professional living outside of Lebanon, can judge whether it is right for you.

    I was approached recently by an ex-colleague who does business in Brazil about outsourcing to Lebanon. Unfortunately, following a financial crisis, low labor costs can attract interest from businesses looking to outsource work. Still, I appreciated his interest in our beloved Lebanon.

    In a similar context, 20 years ago I landed in Argentina for the first time. The country had just defaulted the year before under the rule of Carlos Menem. The peso went from 1:1 with the USD to 3:1. The Lebanese crisis today has many similarities with the Argentine one. I also saw an opportunity for outsourcing to Argentina then; you could hire people back then for 500 pesos per month, or about USD 167, and for the following 10 years it was a great low-cost hub. Over the last decade, and despite the peso continuing to devalue, inflation in Argentina has pushed costs back up. Today, the Argentineans are adapting to reality, demanding USD salaries as their trust in their currency is close to nil.

    [Photo credit: Linkedin]

    The case for outsourcing to Lebanon is simple

    Lebanon has top schools and a history of training top scientists and engineers. Lebanese are multilingual, street smart, practical and have an entrepreneurial mindset. Costs were competitive even before the October 17th revolution (comparable to traditional outsourcing locations such as India or Argentina). Costs have since become somewhat lower, but the global shortage of technical professionals does mean that many can still command a decent salary in USD.

    Challenges? Infrastructure is the top concern 

    Almost every company I tried to convince to outsource jobs to Lebanon complained about the infrastructure. Our failed political system has ruined the infrastructure from electricity to internet connectivity, and there is no light yet at the end of the tunnel. Another concern is that the country’s financial system is not the easiest to navigate and paying through bitcoin is not a scalable model yet. You need to understand how social security (‘Daman’) and terminations work.

    In order for Lebanese companies to increase their competitiveness globally, here are four actions they can take, inspired by our work with Beirut-based outsourcing company Novelus:

    1- Have at least two or three backup internet connections
    2- Have your own power generators to avoid long electricity outages
    3- Set up European or other international bank accounts from which you can bill clients
    4- Recruit top talents who are aligned with the company’s culture and handle talent acquisition, on-boarding, compensation and benefits.

    During our work with Novelus, we only handled technical interviews and gave the final green light for a candidate. The teams assigned to us are co-located and live our core values. In a world of WFH, interactions are seamless. We have lower cost yet exceptional talent, and our retention rate is close to perfect. We have hundreds of US-trained team members and a team based in Beirut. 

    Give our Lebanese talent much needed opportunities 

    To close off this article, let me tell you that after founding many companies, and like many successful Lebanese who are members of LebNet, nothing gives me more happiness or sense of purpose than giving opportunities to young Lebanese, some graduating in a pandemic, with minimal job opportunities abroad, and in a country brought down to its knees. That is why I am a massive believer in the missions of LebNet and of other organizations, such as Jobs for Lebanon, that are facilitating a great number of job opportunities for Lebanese.

    Looking at the outsourcing tech companies in Lebanon, there are only a handful, and the talent currently engaged numbers a few thousand people at best. The potential is ten or twenty times that when you consider the thousands of engineering and software graduates per year. This is the fastest positive impact we can bring to Lebanon, and I hope the Lebanese Diaspora continues to investigate how to give back to Lebanon.

  • 25 Oct 2021 8:30 AM | Anonymous member

    Author: This article is part of an expert series written by Dr. Charbel Rizk, the Founder & CEO of Oculi® - a spinout from Johns Hopkins - a fabless semiconductor startup commercializing patented technology to address the huge inefficiencies with vision technology. In this article, Dr. Rizk discusses the hypothesis that he and his team have developed: Efficient Vision Intelligence (VI) is a prerequisite for effective Artificial Intelligence (AI) for edge applications and beyond. 

    Despite astronomical advances, human vision remains superior to machine vision and is still our inspiration. The eye is a critical component, which explains the predominance of cameras in AI. With megapixels of resolution and trillions of operations per second (TOPS), one would expect today’s vision architecture (camera + computer) to be on par with human vision. However, current technology lags by as much as 40,000x, particularly in terms of efficiency. The combination of the time and energy “wasted” in extracting the required information from the captured signal is to blame for this inefficiency. This creates a fundamental tradeoff between time and energy, and most solutions optimize one at the expense of the other.

    We remain a far cry from replicating the efficacy and speed of human vision. So what is the problem? The answer is surprisingly simple: 

    1. Cameras and processors operate very differently relative to the human eye and brain, largely because they were historically developed for different purposes. Cameras were built for accurate communication and reproduction. Processors have evolved over time with the primary performance measure being operations per second. The latest trend is domain specific architecture (i.e. custom chips), driven by demand from applications which may see benefit in specialized implementations such as image processing. 

    2. Another important disconnect, albeit less obvious, is the architecture itself. When a solution is developed from existing components (i.e. off-the-shelf cameras and processors), it becomes difficult to integrate into a flexible solution and, more importantly, to dynamically optimize it in real time, a key aspect of human vision.

    Machine versus Human Vision

    To compare, we need to first examine the eyes and brain and the architecture connecting them. 

    The eye has ~100x more resolution, and if it were operated like a camera it would transfer ~600 Gb/s to the brain. However, the eye-brain “data link” has a maximum capacity of 10 Mbits/sec. So how does it work? The answer is again simple: eyes are specialized sensors which extract and transfer only the “relevant” information (vision intelligence), rather than taking snapshots or videos to store or send to the brain. While cameras are mostly light detectors, the eyes are sophisticated analysts, processing the signal and extracting clues. This sparse but high-yield data is received by the brain for additional processing and eventual perception. Reliable and rapid answers to: What is it? Where is it? and eventually, What does it mean? are the goals of all the processing. The first two questions are largely answered within the eye. The last is answered in the brain. Finally, an important element in efficiency is the communication architecture itself. The eye and the brain are rarely performing the same function at any given moment. There are signals from the brain back to the eye that allow the two organs to continuously optimize and focus on the task at hand.
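
    Putting those two figures side by side shows how aggressively the eye must reduce the raw signal before anything reaches the brain. A quick illustrative calculation using only the numbers quoted above:

    ```python
    # Illustrative arithmetic from the figures quoted in the text.
    raw_rate_bps = 600e9       # ~600 Gb/s if the eye streamed raw data like a camera
    link_capacity_bps = 10e6   # ~10 Mbit/s eye-to-brain "data link"
    print(f"Implied in-eye data reduction: ~{raw_rate_bps / link_capacity_bps:,.0f}x")  # ~60,000x
    ```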

    Efficient Vision Intelligence (VI) is a prerequisite for effective Artificial Intelligence (AI) for edge applications 

    Everyone is familiar with the term Artificial Intelligence, but what is Vision Intelligence (VI)?

    It accurately describes the output of an efficient and truly smart vision sensor like the eye. One that intelligently and efficiently selects and transfers relevant data at a sustainable bandwidth. Biology demonstrates that the eye does a good deal of parallel pre-processing to identify and discard noise (data irrelevant to the task at hand), transferring only essential information. A processing platform that equals the brain is an important step in matching human perception, but not sufficient to achieve human vision without “eye-like” sensors. In the world of vision technology, the human eye represents the power and effectiveness of true edge processing and dynamic sensor optimization.   

    Efficient Vision Technology is safer and preserves energy  

    As the world of automation grows exponentially and the demand for imaging sensors skyrockets (cameras being the forerunners, with LiDARs and radars around the corner), vision technology which is efficient in resources (photon collection, decision time, and power consumption) becomes even more critical to safety and to saving energy.

    On safety, a vivid example would be pedestrian detection systems, a critical safety function ripe for an autonomous capability, but currently deployed solutions have limited effectiveness. To highlight the challenges with conventional sensors, consider cameras running at 30 frames (or images) per second (fps). That corresponds to a delay of 33 ms to get one image, and many are usually required. To get 5 images, a vehicle at 45 mph would have traveled the length of a football field. That “capture” delay can be reduced by increasing the camera speed (more images per second), but that creates other challenges in sensor sensitivity and/or system complexity. In addition, nighttime operation presents its own unique challenges, and those challenges increase with the sampling speed.

    Real-time processing would also be necessary to not add more delay to the system. Two HD cameras generate about 2 Gbits/sec. This data rate, when combined with the associated memory and processing, causes the overall power consumption for real-time applications to become significant. Some may assume that a vehicle has an unlimited energy supply. But often that is not the case. In fact, some fossil fuel vehicle companies are having to upsize their vehicles’ engines due to the increased electric power consumption associated with ADAS. Moreover, with the world moving towards electric vehicles, every watt counts.  

    If we were to think beyond our edge applications and look at the power cost of inefficient vision technology in general, the findings may surprise the reader. Recent studies estimate that a single email costs 4 grams of CO2 emissions, and 50 g if it includes a picture, which is exactly the problem with vision technology today: it produces too much data. If we consider a typical vision system (camera + network + storage + processing) and assume, conservatively, a total power consumption of 5 Watts and that roughly 1 billion cameras are on at any given time, this translates to a total energy consumption of about 44 TWh per year. This is more than the annual electricity consumption of 163 out of 218 countries and territories, or midway between the consumption of Massachusetts and Nevada. In the age of data centers, images, and videos, “electronics” will soon become the dominant energy consumers and sources of carbon emissions.
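
    That 44 TWh per year figure follows directly from the stated assumptions; a quick back-of-the-envelope check:

    ```python
    # Back-of-the-envelope check of the estimate above, using the assumptions as stated.
    cameras_on = 1_000_000_000   # ~1 billion camera systems on at any given time
    watts_per_system = 5         # conservative camera + network + storage + processing draw
    hours_per_year = 24 * 365

    energy_twh_per_year = cameras_on * watts_per_system * hours_per_year / 1e12
    print(f"~{energy_twh_per_year:.0f} TWh per year")  # ~44 TWh/yr
    ```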

    Machine vision is not about capturing pretty pictures; it needs to generate the “best” actionable information very efficiently from the available signal (photons). This means optimizing the architecture for edge applications, which by nature are resource constrained. This is exactly what nature provided in human vision. Biological organs such as the human eyes and brain operate at performance levels set by fundamental physical limits, under severe constraints of size, weight, and energy resources, the same constraints that tomorrow’s edge solutions have to meet.

    There is still significant room for improvement simply by optimizing the architecture of current machine vision applications, in particular the signal processing chain from capture to action, and human vision is a perfect example of what is possible. Before the world jumps to adding more sensors to the mix, the focus should be on structuring the system in an optimal way to allow the power of machine vision to approach that of human vision.

  • 21 Jun 2021 2:30 AM | Anonymous member

    Author: This article is part of an expert series written by Fadi Daou, the CEO of MultiLane – a high speed test and measurement (T&M) company based in Lebanon. Daou discusses the move from 400G to 800G Ethernet at the leading edge of data communication, the  challenges and solutions at these high speeds and throughputs, and  Lebanon’s role in the industry.  

    [Disclaimer: The below article is the author's personal opinion]

    The COVID-19 pandemic has accelerated the already dramatic shift to online spaces in every aspect of our lives. From Netflix streaming, to Zoom calls, to sharing documents on Office365 or Google Docs, we are now using more bandwidth than ever. The speed at which data centers can communicate internally correlates directly with how fast they can provide their services. Increasing speeds at the user end – with 5G or faster home WiFi, for example – requires exponentially faster speeds at the server end. To accommodate this ever-increasing demand, hyperscalers – like Google, Amazon, and Microsoft – are constantly working on faster, more efficient technologies.

    Data centers operate on the most fundamental layer of the internet, the physical layer, which deals  directly with streams of bits – 1s and 0s – transferred via electrical, or, more frequently, optical signals. On this layer, different technologies that enable the rapid transfer of data come together under the  Ethernet Specification.  

    The year 2021 has seen widespread adoption of 400 Gigabit Ethernet (400G), currently the fastest commercially available means of transferring data. Companies like Microsoft are migrating their data center infrastructure from 100G to 400G in anticipation of increasing bandwidth usage over the next five years, but other hyperscalers are pushing to go even faster.  

    At the leading-edge of data communications, we must always operate in anticipation of future technologies that may be two, three, and even four years in advance of what is available now. If 400G is being adopted now, then it is a certainty that the next stage, 800G Ethernet, is no longer a technology of the future but of the present, with prototypes and standards already in development.  

    The rapid approach of 800G Ethernet is all the more certain given that 400G relies on revolutionary new technology, which has laid the foundation for 800G and beyond. As a full breakdown of these technologies is outside the  purview of this article, I will focus on two factors that are central to this revolutionary shift in data communications: the move from NRZ to PAM4 signaling and the need for heat dissipation.  

    A Tale of Two Signals

    Gigabit Ethernet works by sending one or more signals through a fiber optic cable at a certain speed. Previous generations would send information one bit at a time through a Non-Return-to-Zero (NRZ) signal. At 100G and below, NRZ is ideal as it allows for error-free transfer of data. However, NRZ signals cannot provide reliable throughput at 400G, which has caused a shift to, and heavy reliance on, a different signaling method: PAM4. PAM4 uses four signal levels instead of two, so each symbol carries two bits. This allows for faster transfer of information and less interference at 400G and above.
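
    As a toy illustration of the difference, the sketch below maps a bit stream onto NRZ and PAM4 symbols. The level values are normalized placeholders rather than any standard’s electrical specification, and the Gray-coded pairing is just one common mapping.

    ```python
    # Illustrative only: NRZ carries one bit per symbol (two levels), while PAM4
    # carries two bits per symbol (four levels). Levels here are normalized.
    PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}  # Gray-coded bit pairs

    def nrz_modulate(bits):
        """NRZ: one bit per symbol, two levels."""
        return [+1 if b else -1 for b in bits]

    def pam4_modulate(bits):
        """PAM4: group bits into pairs and map each pair to one of four levels."""
        assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
        return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

    bits = [1, 0, 1, 1, 0, 0, 0, 1]
    print(nrz_modulate(bits))   # 8 symbols for 8 bits
    print(pam4_modulate(bits))  # 4 symbols for 8 bits: twice the bits per symbol period
    ```

    At the same symbol rate, PAM4 therefore moves twice the bits, at the cost of a smaller spacing between levels.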

    But PAM4 isn’t without its own challenges. 

    Such a busy signal means that noise, and therefore errors – instances where a 1 is interpreted as a 0 and vice versa – are inevitable. Companies looking to implement 400G and above must have a keen awareness of what these errors are and how to account for them. Testing equipment, like Bit Error Rate Testers (BERTs), advanced oscilloscopes, and loopback modules, is crucial to ensuring data center functionality. Modern testing instruments can even apply error-mitigation methods directly to see how they might be implemented in the field. For PAM4 signals, the most common way to ensure no information is lost is Forward Error Correction (FEC), which appends additional bits and codewords to the data, allowing a certain amount of data recovery even when there are errors in the signal.

    Keeping Things Cool

    Processing so much information at a time causes significant heat buildup in the pluggable modules, which, if improperly dealt with, can damage the equipment. Heat dissipation through these modules is, therefore, essential to the functioning of the entire system. Here, test and measurement equipment once again plays a vital role. The interconnects used to stress-test ports or systems, called loopbacks, run ports at their highest power threshold to see how they cope, and what additional cooling methods are required to allow for more effective heat dissipation.

    (Data Center Rack Being Tested)

    Connecting to Lebanon

    My expertise is in test and measurement, but my passion has always been to see my country thrive. 30 years ago, I promised my father amongst the olive and pine trees of my ancestral village that I would return to Lebanon when I was ready to create high tech jobs for my fellow Lebanese. Lebanon has never lacked for talent, only opportunity, and my goal is to bring these opportunities home. 

    Shifting Lebanon’s economic focus from internal to international would go a long way towards solving our current crisis. All that is needed is a better ecosystem that enables global competition and moves our economy to double-digit growth. Updates to our labor laws would prove very helpful, as they would provide proper incentives for international companies. However, even without them, Lebanon is still rising to the occasion remarkably, all things considered.

    Initiatives like my own Houmal Technology Park (HTP) already stand as a testament to Lebanon’s capacity to compete on an international scale. Even in the midst of economic turmoil, bright young Lebanese are working to turn their country into a hub for the ICT industry. One of the companies headquartered at HTP, MultiLane, is able to keep pace even with the lightning-fast high-speed I/O industry, shipping in excess of 4,000 interconnect modules every week. Test and measurement instruments manufactured right here are being used in major data centers around the world.

    Looking to the future, if our work continues as it has, I anticipate, and will continue to strive to create, even greater growth and more local opportunities as the world starts to take notice of Lebanon’s untapped potential.

  • 16 Apr 2021 4:45 AM | Anonymous member

    Author: Ali Khayrallah has been working away at the G’s of mobile for many years. He leads a research team shaping the future of mobile technology at Ericsson in Santa Clara. He is currently focused on 6G efforts in the US with industry, academia and government.

    [Disclaimer: The below article is the author's personal opinion]

    Just as the main operators in North America are completing the first wave of 5G network rollouts and 5G phones are becoming mainstream, we are starting to hear about 6G (or Next G, or whatever name sticks eventually). 

    Why so soon and what will it do for us? This article will try to give you a glimpse of some answers.

    The long game

    History doesn’t quite repeat itself but it kind of rhymes. Each ‘G’ (for generation) of mobile from 2G to 4G has lasted about 10 years, and it seems 5G will too. So we can guess that the 6G era will start around 2030. What is less obvious to the general public is that the buildup also takes a decade, so the time to start working on 6G is now. As you will come to appreciate, this is truly a long game from early research to commercial deployment on a global scale. Each new G offers an opportunity for radical changes, unconstrained by necessary compatibility within a single generation. To get there, we need time: to do the research and mature the technologies that potentially drive changes; to integrate them into complex systems and figure out ways to exploit their potential; to reduce them to practice and understand their feasibility; to create standards that incorporate them; to design products and services based on those standards; and finally to deploy networks.

    I will first talk about what 6G is about, then discuss how to get there, in particular standards and spectrum, as well as geopolitical factors that may help or hinder us.



    6G: use cases and benefits

    It is of course difficult today to pin down the technologies that will enable 6G networks or the use cases that will drive the need for them, but we can paint a big picture of where we might be headed. 

    We expect the trend towards better performance in customary metrics such as capacity, bit rate, latency, coverage and energy efficiency to continue, as it has in previous G’s. To that end, we foresee further improvements in workhorse technologies such as multi-antenna transmission and reception, in particular more coordination of transmissions across sites. Also, the insatiable appetite for more spectrum will continue to lead us to ever higher frequencies, into the low 100’s of GHz. The need for ubiquitous coverage will push for integration of non-terrestrial nodes such as drones and low earth orbiting satellites into terrestrial networks. The success of these various directions hinges on solving a wide array of tough technical problems.

    Networks will also need to evolve in other ways, such as trustworthiness, which entails the network’s ability to withstand attacks and recover from them. One aspect is confidentiality, which goes beyond protection of data during transmission to secure computation and storage. Another aspect is service availability, which requires resilience to node failure and automated recovery.

    We can also think of use cases that will create the demand for 6G. One use case is the internet of senses, where we expect the trend from smartphones to AR/VR devices and beyond to continue, engaging most of our senses, merging the physical and virtual worlds, and putting very tough latency and bit rate requirements on the network. Another use case is very simple, possibly battery-less devices such as sensors and actuators for home automation, asset tracking, traffic control, etc. Such devices must be accommodated by the network with appropriate protocols. Yet another is intelligent machines, where the network provides specialized connectivity among AI nodes, allowing them to cooperate. Speaking of AI, it is also expected to increasingly pervade the operation of the network itself, moving down from high level control closer to signal processing at the physical layer.

    Setting up standards: why do we need them?

    It sounds so 20th century but there are very good reasons, the main one being mobility. In mobile communications we need well defined interfaces so network elements speak and understand the same language. Phones move around and they have to be able to connect to different networks. Within a network, components from different vendors have to work together. Standards define the interfaces to make it all work together, and they do much more, including setting the minimum performance requirements for phones and base stations. In practice, companies spend a lot of money and effort on interoperability testing to ensure their equipment plays well with others.

    Three main ingredients to 6G success (or failure)

    3GPP

    In the mobile industry, the main standards body is 3GPP, which issues releases about every 18 months. A release fully defines a set of specifications that can be used to develop products. For example, Release 15 (2018) provided the first specifications for 5G, primarily covering the signaling and data channels to support improved mobile broadband. One particularly useful feature is the so-called flexible numerology, which enables the same structure to be adapted for use over a wide range of frequency bands. Release 16 (2020) added several features, including unlicensed spectrum operation and industrial IoT. Release 17, currently under construction, will include operation at higher frequencies, more IoT features and satellite networks. From where we stand today, we expect the first release with 6G specifications to arrive around 2028.
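
    For readers curious what “flexible numerology” means in practice: in the published 5G NR specifications the subcarrier spacing scales in powers of two from a 15 kHz baseline, and slots shorten accordingly, so the same frame structure can serve both low and high frequency bands. The snippet below is only an illustration of that scaling rule, not a reproduction of the standard.

    ```python
    # Illustrative: 5G NR "flexible numerology" scales subcarrier spacing by 2^mu
    # from a 15 kHz baseline; slot duration shrinks by the same factor.
    BASE_SCS_KHZ = 15  # LTE-compatible baseline spacing

    def numerology(mu: int) -> dict:
        return {
            "mu": mu,
            "subcarrier_spacing_kHz": BASE_SCS_KHZ * (2 ** mu),
            "slot_duration_ms": 1.0 / (2 ** mu),  # a 1 ms subframe holds 2^mu slots
        }

    for mu in range(5):  # mu = 0..4 spans 15 kHz (low band) up to 240 kHz (mmWave)
        print(numerology(mu))
    ```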

    3GPP standards enable mobile networks to flourish globally, making it possible to recoup the enormous R&D investments. Since the advent of 4G, there has been a single effective standard worldwide. Earlier, there were two dominant factions developing the CDMA and GSM families of standards. This split probably led to the failure of several large companies. In our industry, fragmentation is the F-word. I will revisit this in the context of current geopolitics.

    Spectrum

    Until recently, all mobile spectrum was in the low band (below 3 GHz), which has become jam-packed not only with mobile but many other services. The psychedelic colored spectrum map gives you a feel for it. With 5G, the floodgates have opened, with new spectrum becoming available in mid band (roughly 3 to 7 GHz) and high band (24 to 52 GHz). These higher bands are great because it’s possible to operate with wider bandwidths (in 100’s of MHz compared to 10-20 MHz in low band) and support higher rate services. But propagation characteristics in higher bands make for challenging deployment, as signals don’t travel well through walls etc. Moving into even higher bands in the 100’s of GHz will exacerbate this problem. Also, spectrum used by legacy systems will get gradually re-farmed for use by new networks. In addition, there is a push led by the FCC (Federal Communications Commission) to mandate spectrum sharing between networks and incumbent users such as radar as a way to accelerate spectrum availability. The CBRS band at 3.55 GHz is the leading example of this type of policy. Keep in mind that spectrum is our lifeline and we’ll take it and make the best of it wherever and however it’s available.

    Geopolitics

    The “trade is good” principle that has dominated government policies since the fall of the Soviet Union seems to be on its way out, being replaced by more nationally centered policies. In this context there is now keen awareness of the rise of China as a serious technological rival to the US and its allies. This has manifested itself to a full extent in telecom with all the recent attention on 5G and mobile networks as a strategic national asset.

    There is wide support in Congress for big spending on technology R&D, including 6G, evidenced by several proposals under discussion around the National Science Foundation (NSF) alone. Their common thread is a multifold budget expansion and an increased emphasis on technology transfer.

    In the private sector, the Alliance for Telecommunications Industry Solutions (ATIS), which represents the interests of the telecom industry in North America, has launched the Next G Alliance to develop a roadmap towards 6G and lobby the government to influence policy and secure funding for R&D.

    This is all good on the national scale, but it may come back to bite us with standards fragmentation and the threat of losing the global market scale. Navigating this complicated landscape will be challenging and it will be fascinating to me to see how it all plays out over the coming years.


  • 25 Nov 2020 6:45 AM | Anonymous

    This is the first part of a series on Executive Coaching and Leadership Development for professionals.

    Executive coaching has exploded in popularity in the last decade and today benefits from an army of passionate advocates that includes not only the coaches but also the participants who have personally benefited from coaching and the organizational sponsors who have witnessed its transformational power firsthand.

    Between 25 and 40 percent of Fortune 500 companies use executive coaches, according to the Hay Group (acquired by Korn Ferry), a major human-resources consultancy. Lee Hecht Harrison, the world’s leading career management firm, derives a full 20 percent of its revenues from executive coaching. Manchester, Inc., a similar national firm, finds that about six out of ten organizations currently offer coaching or other developmental counseling to their managers and executives. Another 20 percent of companies plan to offer coaching within the next year. Today, Cisco, Google, Uber, and Facebook, among others, have created internal coaching departments and hired some of the brightest executive coaching minds.

    There are many definitions of executive coaching, but the two most straightforward definitions that we prefer to use are “a relationship in which a client engages with a coach in order to facilitate his or her becoming a more effective leader” (Ely et al.) and “the facilitation of learning and development with the purpose of enhancing effective action, goals achievement, and personal satisfaction.”

    While these definitions provide a broad description of its intended purpose, the following criteria are used to more strictly define executive coaching:

    1. One-on-one interaction between an executive coach and the client – as opposed to team coaching, team building, group training, or group consulting. Coaches and clients usually interact through live sessions, weekly or bi-weekly for 60 to 90 minutes.
    2. Methodology based – drawing on specific tools, methods, and techniques that promote the client’s agenda to uncover their own blind spots, identify their challenges, and develop their own goals.
    3. Structured conversations led by a trained professional – as opposed to more traditional mentorship that takes place between managers, HR professionals, and peers. These conversations focus on identifying and strengthening the relationship between the client’s own development and the requirements of the business. As the complexity of the business increases and the expectations on leaders grow, they find themselves needing to develop new skills and behaviors while eliminating self-inhibitors.
    4. Task-oriented – Executive coaching involves important stakeholders beyond the client and the coach; the goals and future outcomes for the organization are central to the process. By using a sequence of explorations and small goal-achievements, the coach helps the client take action constantly in small increments to create long-lasting behavioral changes and results for both the client and the organization.
    5. Long-term Impact – intended to enhance the person’s ability to learn and develop new skills independently. The model focuses on developing the client’s capacity, knowledge, motivation, insights, and emotional intelligence maturity in order to effect long-term benefits.
    There are also many areas of expertise in which executive coaches can support clients:
    1. Business Acumen – focus on a deep understanding of best business practices and strategies, management principles and behaviors, financial models, business models and plans, and startup life cycles. While business consultants are hired to provide business relevant answers, executive coaches with business acumen guide the clients to define their own challenges, and develop their own solutions that align with their career and organizational goals.
    2. Organizational Knowledge – focus on design, structure, power and authority, alignment, culture, leadership models, company goal achievement and leadership development. The complexities of organizational models are largely invisible to the untrained eye, or to coaches with no relevant prior personal experience.
    3. Coaching knowledge – focus on coaching methodologies, competencies, practices, assessment, personal goals achievement, as well as being students of lifelong learning and behavioral improvement. While there are many leaders providing coaching to their peers and teams, the work of professional executive coaches within organizations involves unleashing the human spirit and expanding people’s capacity to stretch and grow beyond self-limiting boundaries.
    “Executive coaches are not for the meek. They’re for people who value unambiguous feedback. If coaches have one thing in common, it’s that they are ruthlessly results-oriented,” according to an article in Fast Company Magazine. This quote defines the major boundary between executive coaching and other, less structured areas such as advising, consulting, or peer mentoring.

    In the next part of this series, we will explore the challenges and learnings on how to become a rock-star leader.

    Main image via Pexels.

    As an Executive Coach, Elie Habib guides CEOs, entrepreneurs, and senior executives toward performance excellence and acceleration of their career aspirations.

    He serves as a thought partner in guiding leaders to address their most complex leadership challenges.

    Elie is CEO of MotivaimCoach, a co-founder of LebNet, a member of the Investment Committee of MEVP’s Impact Fund (Lebanon), and a former corporate executive and CEO/founder.

  • 25 Nov 2020 6:41 AM | Anonymous

    This article is part of a series written by industry experts. In this part, Nadim Maluf, the CEO of Qnovo Inc., discusses the breakthrough of lithium-ion batteries and their impact on the electrical grid.

    On 9 October 2019, the Royal Swedish Academy of Sciences awarded the Nobel Prize in Chemistry to three scientists for “the development of the lithium-ion battery.” It was a long-overdue recognition for John Goodenough, Stanley Whittingham, and Akira Yoshino, and for the thousands of engineers and scientists who have made rechargeable batteries a pillar of a mobile society.

    People around the globe recognize lithium-ion batteries as the main power source in their smartphones and laptop computers, and increasingly in new generations of electric vehicles. If you drive one, such as a Tesla, you are likely quite familiar with its capabilities and limitations. Yet few recognize how central lithium-ion batteries have become to our global economies, or the extent to which the “green revolution” relies on energy storage and battery systems. The purpose of this article is to shed some light on the underlying technologies and applications, both present and future.

    In many respects, a lithium-ion battery is a simple device. It has two electrical terminals: positive and negative. Yet in many other respects it is complex, or at least evokes a sense of complexity, because it involves “chemistry,” a subject that stirs unpleasant memories for many college graduates.

    The basic structure of a lithium-ion battery

    In its most basic form, a lithium-ion battery consists of three sandwiched layers rolled together inside a package: an anode, a cathode, and a porous separator in between. During charging, lithium ions travel from the cathode to the anode through the pores of the separator. The opposite occurs during discharging.

    The battery inside your smartphone looks very much like the one described above. The battery inside an electric vehicle consists of hundreds — or in some cases thousands — of individual batteries (called cells) electrically connected together to provide more electrical charge and energy.

    Stored energy determines how long a battery can power a device, i.e., the duration of time its energy is available to the user. The basic unit of energy is the watt-hour, or W.h. The energy capacity of a small smartphone battery is about 15 W.h, sufficient to power the device for a day. That of an electric vehicle is nearly 100,000 W.h, often written as 100 kWh. This amount is sufficient for a driving range of approximately 500 km – or 5 hours at highway speeds. Batteries intended for the electric grid store far larger amounts of energy, typically several million W.h, or MW.h.
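    To make the arithmetic concrete, the short Python sketch below converts a pack’s stored energy into an approximate driving range and device runtime. The consumption and power-draw figures are illustrative assumptions, not numbers from the article.

    ```python
    # Rough back-of-the-envelope conversions between stored energy and range/runtime.
    # The per-km consumption and phone power draw are illustrative assumptions.

    def driving_range_km(pack_energy_wh: float, consumption_wh_per_km: float = 200.0) -> float:
        """Approximate EV driving range assuming a fixed average consumption per km."""
        return pack_energy_wh / consumption_wh_per_km

    def runtime_hours(battery_energy_wh: float, average_power_w: float) -> float:
        """Approximate runtime of a device drawing a constant average power."""
        return battery_energy_wh / average_power_w

    ev_pack_wh = 100_000   # 100 kWh electric-vehicle pack, as cited above
    phone_wh = 15          # typical smartphone battery, as cited above

    print(f"EV range:        ~{driving_range_km(ev_pack_wh):.0f} km")  # ~500 km
    print(f"Smartphone life: ~{runtime_hours(phone_wh, 0.6):.0f} h")   # ~25 h at an assumed 0.6 W draw
    ```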

    The number of times a battery can be charged and discharged is called its “cycle life.” In principle, charge-discharge cycling could continue indefinitely, but degradation of structural materials within the battery limits its lifespan to less than 1,000 cycles. That works well for most applications.

    Charge time is another measure of importance, especially for consumer devices and electric vehicles.

    As the old saying goes, there is no such thing as a free lunch. Stored energy, cycle life, and charge time are all interrelated. For example, repeated fast charging may accelerate battery damage, shortening its lifespan (or cycle life). These complex interactions force manufacturers to optimize the design of a battery for its intended application.
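    As a minimal sketch of how these quantities interact, the following snippet estimates charge time from charging power and the total energy a pack can deliver over its cycle life. The charging powers are assumed values chosen only for illustration.

    ```python
    # Illustrative only: charging powers are assumptions, and real packs taper the
    # charging current near full charge and rarely see complete charge-discharge cycles.

    def charge_time_hours(pack_energy_kwh: float, charging_power_kw: float) -> float:
        """Idealized charge time, ignoring charge tapering and conversion losses."""
        return pack_energy_kwh / charging_power_kw

    def lifetime_throughput_kwh(pack_energy_kwh: float, cycle_life: int) -> float:
        """Total energy delivered over the pack's life if every cycle were a full one."""
        return pack_energy_kwh * cycle_life

    pack_kwh = 100
    print(f"Home charger (7 kW):   ~{charge_time_hours(pack_kwh, 7):.1f} h")    # ~14.3 h
    print(f"Fast charger (150 kW): ~{charge_time_hours(pack_kwh, 150):.1f} h")  # ~0.7 h
    print(f"Lifetime throughput at 1,000 cycles: {lifetime_throughput_kwh(pack_kwh, 1_000):,.0f} kWh")
    ```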

    The success of lithium-ion batteries in modern times is largely due to their favorable economics. The cost of batteries plummeted over the past decade from roughly US $1,000 per kWh to nearly $100 per kWh. Forecasters predict that electric vehicles will reach cost parity with traditional combustion-engine cars by 2024. This cost decline, combined with government regulations on greenhouse gas (GHG) emissions, is inexorably transforming the automotive and transportation industries.
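    A quick calculation shows why that price decline matters so much for vehicle economics; the 100 kWh pack size is an assumption used only for illustration.

    ```python
    # Pack cost at the two price points cited above; the 100 kWh pack size is assumed.
    pack_kwh = 100
    for price_per_kwh in (1_000, 100):
        print(f"${price_per_kwh:,}/kWh -> pack cost ${pack_kwh * price_per_kwh:,}")
    # $1,000/kWh -> pack cost $100,000
    # $100/kWh   -> pack cost $10,000
    ```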

    Beyond consumer devices and electric vehicles, electric utilities are exploring the use of large-scale lithium-ion batteries for their grids. Many are familiar with pairing batteries with residential solar panel installations for the purpose of going off-grid. The reality is that such an application is limited in appeal to affluent suburban or rural areas; dense urban geographies will remain dependent for the foreseeable future on electric utility companies.

    Several utilities around the globe are piloting the use of lithium-ion batteries to offset a timing imbalance, dubbed the “duck curve,” between electric power demand and renewable energy production. Solar power peaks in the afternoon hours, causing traditional fossil-fuel power plants, namely gas-powered turbines, to throttle down their production. Yet these turbines need to ramp up rapidly again in the evening to make up for rising power demand after the sun sets. This steep decline in traditional power generation in the afternoon, followed by a rapid ramp in the evening, causes significant stress on the grid and higher greenhouse gas emissions.

    Enter lithium-ion batteries. They soak up the excess solar energy generated during daylight and then deliver it after the sun goes down. The result is a flatter power generation profile for traditional fossil fuel power plants with improved operating efficiencies, lower GHG emissions and better economics.
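    A toy net-load calculation illustrates both the duck curve and the role storage can play. The hourly demand and solar figures below are synthetic, chosen only to reproduce the general shape of the problem; they are not actual grid data.

    ```python
    # Toy duck-curve illustration with synthetic hourly data (MW); not real CAISO figures.
    demand = [20, 19, 18, 18, 19, 21, 24, 26, 27, 27, 26, 26,
              25, 25, 25, 26, 28, 31, 34, 35, 33, 30, 26, 22]   # system demand by hour
    solar  = [0, 0, 0, 0, 0, 0, 1, 4, 8, 11, 13, 14,
              14, 13, 11, 8, 4, 1, 0, 0, 0, 0, 0, 0]            # solar production by hour

    # Net load is what conventional plants must supply after solar is subtracted.
    net_load = [d - s for d, s in zip(demand, solar)]
    steepest_ramp = max(net_load[h] - net_load[h - 1] for h in range(13, 21))

    print(f"Midday net-load minimum:      {min(net_load)} MW")
    print(f"Evening net-load peak:        {max(net_load[17:22])} MW")
    print(f"Steepest hourly evening ramp: {steepest_ramp} MW/h")
    # A battery would charge during the midday trough and discharge across the evening
    # ramp, flattening the profile that conventional plants have to follow.
    ```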

    The California Energy Commission approved in 2018 a mandate to install solar panels on all new single-family homes constructed after 2020. With a steady rise in solar energy use thus guaranteed, batteries become a critical component in integrating renewable sources of energy with the traditional grid.

    Duck Curve: Timing imbalance between peak demand and renewable energy production in California.
    (Source: California Independent System Operator, CAISO)

    Traditional grids historically consisted of large power production plants in distant locations and extensive transmission grid lines to transport the power to large urban areas.

    Power plants adjusted their energy output to match the exact demand at any moment in time. Future grids will evolve toward more distributed designs, integrating renewable energy sources (e.g., solar, wind) in proximity to or within urban boundaries, together with energy storage systems to store energy when it is generated and release it when it is needed.

    California leads the nation in energy storage with 4,200 MW of installed capacity — enough to power nearly 1 million households. California Senate Bill SB 100 mandates that the state receive all of its energy from carbon-neutral sources by 2045. Both the state legislature and the California Public Utilities Commission (CPUC) have imposed specific energy-storage targets on investor-owned utilities operating across the state.
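    The households figure follows from simple division; the sketch below back-calculates the per-household supply implied by those numbers.

    ```python
    # Back-calculating the per-household supply implied by the figures cited above.
    installed_mw = 4_200
    households = 1_000_000
    kw_per_household = installed_mw * 1_000 / households
    print(f"Implied supply per household: {kw_per_household:.1f} kW")  # ~4.2 kW
    ```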

    Looking ahead to the next decade, energy storage and batteries will become central to global energy and transportation policies. It is no surprise that forecasters estimate the market for lithium-ion batteries will exceed $300 billion by 2030.

    Main image via Pexels.

