Magnetism in metastable and annealed compositionally complex alloys

Compositionally complex materials (CCMs) present a potential paradigm shift in the design of magnetic materials. These alloys exhibit long-range structural order coupled with limited or no chemical order. As a result, extreme local environments exist with large variations in the magnetic energy terms, which can manifest as large changes in the magnetic behavior. In the current work, the magnetic properties of (Cr, Mn, Fe, Ni) alloys are presented. These materials were prepared by room-temperature combinatorial sputtering, resulting in a range of compositions with a single bcc structural phase and no chemical ordering. The combinatorial growth technique allows CCMs to be prepared outside of their thermodynamically stable phase, enabling the exploration of otherwise inaccessible order. The mixed ferromagnetic and antiferromagnetic interactions in these alloys cause frustrated magnetic behavior, which results in an extremely low coercivity (<1 mT) that begins to increase rapidly near 50 K. At low temperatures, the coercivity reaches nearly 500 mT, comparable to some high-anisotropy magnetic materials. Commensurate with the divergent coercivity is an atypical drop in the temperature-dependent magnetization. These effects are explained by a mixed magnetic phase model consisting of ferro-, antiferro-, and frustrated magnetic regions, and are rationalized by simulations. A machine-learning algorithm is employed to visualize the parameter space and inform the development of subsequent compositions. Annealing the samples at 600 °C chemically orders them, more than doubling the Curie temperature and increasing the saturation magnetization by as much as 5×. Simultaneously, the large coercivities are suppressed, resulting in magnetic behavior that is largely temperature independent over a range of 350 K. The ability to transform from a hard magnet to a soft magnet over a narrow temperature range makes these materials promising for heat-assisted recording technologies.
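
As a rough intuition for how mixed ferromagnetic and antiferromagnetic interactions produce frustration, the following toy random-bond Ising simulation (our illustrative sketch, not the authors' simulation code; the lattice size, bond fraction, and temperatures are assumptions) shows how a spin system with randomly signed couplings struggles to develop full magnetic order on cooling:

```python
# Minimal illustrative sketch (not the authors' simulation): a 2D
# random-bond Ising model in which each nearest-neighbor coupling is
# randomly ferromagnetic (+J) or antiferromagnetic (-J), mimicking the
# mixed interactions in a chemically disordered alloy. The AFM bond
# fraction `p_afm` is a made-up illustration parameter.
import numpy as np

rng = np.random.default_rng(0)
L, p_afm, J = 16, 0.4, 1.0          # lattice size, AFM bond fraction, coupling
spins = rng.choice([-1, 1], size=(L, L))
# One bond array per direction (right and down neighbors), each +/- J.
Jx = J * rng.choice([1, -1], p=[1 - p_afm, p_afm], size=(L, L))
Jy = J * rng.choice([1, -1], p=[1 - p_afm, p_afm], size=(L, L))

def local_field(s, i, j):
    """Effective field on site (i, j) from its four bonded neighbors."""
    return (Jx[i, j] * s[i, (j + 1) % L] + Jx[i, (j - 1) % L] * s[i, (j - 1) % L]
            + Jy[i, j] * s[(i + 1) % L, j] + Jy[(i - 1) % L, j] * s[(i - 1) % L, j])

def sweep(s, T):
    """One Metropolis sweep at temperature T (k_B = 1)."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        dE = 2.0 * s[i, j] * local_field(s, i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] *= -1

for T in (3.0, 1.0, 0.3):           # cool through the frustrated regime
    for _ in range(400):
        sweep(spins, T)
    print(f"T={T:.1f}  |m|={abs(spins.mean()):.3f}")
```

With a sizable fraction of antiferromagnetic bonds, the net magnetization stays well below saturation even at low temperature, the qualitative signature of the frustrated regions invoked in the mixed-phase model.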

The wireless network jamming problem subject to protocol interference using directional antennas and with battery capacity constraints

Wireless networks support the operation and maintenance of a variety of critical infrastructure, and keeping these networks functional in the face of adversarial activity is a paramount concern of infrastructure managers. Supporting these networks’ continued operability requires a robust understanding of wireless-network functionality, including the ways in which adversaries may seek to jam such networks using recently developed capabilities. However, past work on wireless network jamming subject to protocol interference has focused on omnidirectional antennas for both the target and the jamming attack nodes and has not considered the impact of battery capacity on the success of these jamming efforts. Based on a field test of an ad hoc network performed by Ramanathan et al. (2005), in which the authors found that directional antennas offer an “order-of-magnitude improvement in the capacity and connectivity of an ad hoc network,” work in this field should be extended to include directional antennas. By incorporating directional antennas, analysts can more realistically model the antennas present in everyday use. In addition, the battery capacity of wireless network nodes can affect the effectiveness of a jamming attack and should be considered; doing so captures real-world scenarios in which energy limitations constrain actual network performance. The mathematical model discussed in this paper demonstrates how network jamming is affected by directional antennas, battery capacity, and node density, and how these factors would impact a robust jamming attack. Particularly noteworthy is the finding that, in certain cases, high battery capacity can offer as much as half an order of magnitude of improvement in data transmission over lower battery capacity. These results show that the model could be used to aid decision makers in understanding how to design a network that is robust against jamming attacks.
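
To make the protocol-interference setting concrete, the sketch below (an assumption-laden illustration, not the paper's model; the geometry, ranges, beamwidths, and battery values are invented) checks whether a link is jammed when jammers use directional antennas and deplete their batteries:

```python
# Hedged sketch of a protocol-interference feasibility check with
# directional antennas; all parameters are illustrative assumptions.
import math

def covers(tx, heading_deg, beamwidth_deg, rng_m, pt):
    """True if point `pt` lies inside tx's directional antenna sector."""
    dx, dy = pt[0] - tx[0], pt[1] - tx[1]
    if math.hypot(dx, dy) > rng_m:
        return False
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0
    off_axis = abs((bearing - heading_deg + 180.0) % 360.0 - 180.0)
    return off_axis <= beamwidth_deg / 2.0

def link_jammed(tx, rx, jammers, jam_range, jam_beamwidth):
    """Protocol model: the link fails if any jammer with remaining
    battery has the receiver inside its interference sector."""
    return any(j["battery"] > 0 and
               covers(j["pos"], j["heading"], jam_beamwidth, jam_range, rx)
               for j in jammers)

jammers = [{"pos": (50.0, 10.0), "heading": 180.0, "battery": 5.0},
           {"pos": (90.0, 40.0), "heading": 270.0, "battery": 0.0}]  # depleted
print(link_jammed((0.0, 0.0), (30.0, 10.0), jammers,
                  jam_range=60.0, jam_beamwidth=45.0))
```

Narrowing `jam_beamwidth` or zeroing out `battery` removes receivers from a jammer's interference sector, which is exactly the directional-antenna and energy-limitation effect the abstract highlights.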

A stochastic programming model with endogenous uncertainty for proactive supplier risk mitigation of low-volume-high-value manufacturers considering decision-dependent supplier performance

Poor supplier performance can result in delays that disrupt manufacturing operations. By proactively managing supplier performance, the likelihood and severity of supplier risk can be minimized. In this paper, we study the problem of selecting optimal supplier development programs (SDPs) under a limited budget to improve suppliers’ performance and proactively reduce supplier risks for a manufacturer. A key feature of our research is that it incorporates the uncertainty in supplier performance in response to SDP selection decisions. This uncertainty is endogenous (decision-dependent), as the probability distribution of supplier performance depends on the selection of SDPs, which introduces modeling and algorithmic challenges. We formulate this problem as a two-stage stochastic program with decision-dependent uncertainty. We implement a sample-based greedy algorithm and an accelerated Benders’ decomposition method to solve the developed model. We evaluate our methodology using numerical cases from four low-volume, high-value manufacturing firms. The results provide insights into the effects of the budget amount and the number of SDPs on the firm’s expected profit. Numerical experiments demonstrate that an increase in budget results in profit growth, e.g., 5.09% profit growth for one firm. At lower budget levels, increasing the number of available SDPs yields greater profit growth. The results also demonstrate the significance of considering uncertainty in supplier performance and of considering multiple supplier risks for the firm. In addition, computational experiments demonstrate that our algorithms, especially the greedy approximation algorithm, can solve large problem instances in a reasonable time.
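
A minimal sketch of what a sample-based greedy heuristic for this kind of problem can look like is given below; the profit model, baseline probabilities, and costs are illustrative assumptions, not the paper's data or exact algorithm:

```python
# Hedged sketch of a budgeted, sample-based greedy heuristic for SDP
# selection. Decision-dependent uncertainty enters because the sampled
# supplier performance depends on which SDPs are selected.
import random

random.seed(1)
# Each candidate SDP: (cost, probability it lifts its supplier to
# "good" performance once selected). Illustrative numbers only.
sdps = [(10, 0.5), (15, 0.7), (8, 0.35), (12, 0.6)]
BUDGET, N_SAMPLES, PROFIT_GOOD, PROFIT_POOR = 25, 2000, 100.0, 60.0

def expected_profit(selected):
    """Monte Carlo estimate of expected profit given selected SDPs."""
    total = 0.0
    for _ in range(N_SAMPLES):
        for i, (_, p_good) in enumerate(sdps):
            p = p_good if i in selected else 0.1   # baseline without SDP
            total += PROFIT_GOOD if random.random() < p else PROFIT_POOR
    return total / N_SAMPLES

selected, spent = set(), 0
while True:
    best, best_gain = None, 0.0
    base = expected_profit(selected)
    for i, (cost, _) in enumerate(sdps):
        if i in selected or spent + cost > BUDGET:
            continue
        gain = (expected_profit(selected | {i}) - base) / cost  # gain per dollar
        if gain > best_gain:
            best, best_gain = i, gain
    if best is None:
        break
    selected.add(best)
    spent += sdps[best][0]
print("selected SDPs:", sorted(selected), "spent:", spent)
```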

Risk-Averse Bi-Level Stochastic Network Interdiction Model for Cyber-Security

This paper proposes a methodology to enable a risk-averse, resource-constrained cyber network defender to optimally deploy security countermeasures that protect against potential attackers with an uncertain budget. The proposed methodology is based on a risk-averse bi-level stochastic network interdiction model on an attack graph (a map of the potential attack paths of a cyber network) that minimizes the weighted sum of the expected maximum loss over all attack scenarios and the risk of substantially large losses. The conditional value-at-risk (CVaR) measure is incorporated into the stochastic programming model to reduce the risk of substantially large losses. An exact algorithm is developed to solve the model, along with several acceleration techniques to improve computational efficiency. Numerical experiments demonstrate that the acceleration techniques enable the solution of relatively large problems within a reasonable amount of time: simultaneously applying all of the acceleration techniques reduces the average computation time of the basic algorithm by 71% for 100-node graphs. Using metrics called the mean-risk value of the stochastic solution and the value of risk-aversion, computational results suggest that the stochastic risk-averse model provides substantially better network interdiction decisions than the deterministic (uncertainty-ignoring) and risk-neutral models when 1) the distribution of the attacker budget has a heavy right tail and 2) the defender is highly risk-averse.
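
For reference, the CVaR risk measure has the standard Rockafellar-Uryasev representation, and the mean-risk objective takes the generic weighted form below (our notation, not necessarily the paper's exact formulation):

```latex
% Standard Rockafellar--Uryasev form of CVaR and a generic mean-risk
% objective; symbols are illustrative. L(x,\omega) is the defender's
% loss under interdiction decision x and attack scenario \omega, and
% \lambda \in [0,1] weights risk against expected loss.
\begin{align}
  \mathrm{CVaR}_\alpha\!\bigl(L(x,\omega)\bigr)
    &= \min_{\eta \in \mathbb{R}}
       \Bigl\{ \eta + \tfrac{1}{1-\alpha}\,
       \mathbb{E}\bigl[(L(x,\omega) - \eta)^{+}\bigr] \Bigr\}, \\
  \min_{x \in X} \;
    &(1-\lambda)\,\mathbb{E}\bigl[L(x,\omega)\bigr]
     + \lambda\,\mathrm{CVaR}_\alpha\!\bigl(L(x,\omega)\bigr).
\end{align}
```

The auxiliary variable $\eta$ makes CVaR linear-programming representable over scenario sets, which is what allows it to be folded into the stochastic programming model.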

Atomistic modeling of meso-timescale processes with SEAKMC: A perspective and recent developments

On-the-fly kinetic Monte Carlo (kMC) methods have recently garnered significant attention following successful applications to various atomic-scale problems at timescales beyond the reach of classical molecular dynamics. These methods play a critical role in modeling atomistic meso-timescale processes, and it is therefore essential to further improve their capabilities. Herein, we review one of the on-the-fly kMC methods, Self-Evolving Atomistic kinetic Monte Carlo (SEAKMC), and propose two schemes that considerably enhance the efficiency of saddle point searches (SPSs) during the simulations. The performance of these schemes is tested using the diffusion of point defects in bcc Fe. In addition, we discuss approaches that significantly mitigate the limitations of these schemes, further improving their efficiency. Importantly, these schemes improve SPS efficiency not only for SEAKMC but also for other on-the-fly kMC methods, broadening the application of on-the-fly kMC simulations to complex meso-timescale problems.
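
For context, on-the-fly kMC methods such as SEAKMC feed the barriers found by saddle point searches into a standard residence-time (BKL) event selection; a minimal sketch follows, with assumed barrier values and prefactor:

```python
# Hedged sketch of the standard residence-time (BKL) kinetic Monte
# Carlo step that on-the-fly methods build on: saddle point searches
# supply the barriers, and one event is drawn with probability
# proportional to its Arrhenius rate. The barriers and prefactor
# below are illustrative assumptions, not SEAKMC output.
import math
import random

random.seed(0)
KB = 8.617e-5          # Boltzmann constant, eV/K
NU = 1.0e13            # attempt frequency, 1/s (assumed prefactor)
T = 600.0              # temperature, K

# Barriers (eV) that a saddle point search might return for the
# escape paths out of the current state, e.g. point-defect hops.
barriers = [0.34, 0.34, 0.65, 0.81]

rates = [NU * math.exp(-eb / (KB * T)) for eb in barriers]
total = sum(rates)

# Select an event with probability rate_i / total.
r, acc, chosen = random.random() * total, 0.0, 0
for i, k in enumerate(rates):
    acc += k
    if r <= acc:
        chosen = i
        break

# Advance the clock by an exponentially distributed residence time.
dt = -math.log(1.0 - random.random()) / total
print(f"event {chosen} taken, clock advanced by {dt:.3e} s")
```

Because low-barrier events dominate the rate sum exponentially, the cost of each kMC step is dominated by finding the saddle points, which is why the SPS efficiency gains discussed above matter.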

Identifying and mitigating supply chain risks using fault tree optimization

Although supply chain risk management and supply chain reliability have been studied extensively, a gap remains for solutions that take a systems approach to quantitative risk-mitigation decision making, especially in industries that present unique risks. In practice, supply chain risk mitigation decisions are made in silos and are reactionary. In this article, we address these gaps by representing a supply chain as a system using a fault tree based on the bill of materials of the product being sourced. Viewing the supply chain as a system provides the basis for an approach that treats all suppliers within the supply chain as a portfolio of potential risks to be managed. Next, we propose a set of mathematical models to proactively and quantitatively identify and mitigate at-risk suppliers using data already available to the enterprise, with consideration for a firm’s budgetary constraints. Two approaches are investigated and demonstrated on actual problems experienced in industry. The examples presented focus on Low-Volume High-Value (LVHV) supply chains, which are characterized by long lead times and a limited number of capable suppliers, making them especially susceptible to disruption events that may delay delivered products and subsequently increase the financial risk exposure of the firm. Although LVHV supply chains are used to demonstrate the methodology, the approach is applicable to other types of supply chains as well. Results are presented as a Pareto frontier and demonstrate the practical application of the methodology.
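
To illustrate the fault-tree view of a supply chain, the sketch below evaluates a toy two-component bill of materials and shows how mitigating one supplier lowers the top-event (delay) probability; the structure and probabilities are assumptions, not the article's case data:

```python
# Hedged sketch of fault-tree evaluation for a sourced product: basic
# events are supplier failures (assumed independent), OR gates model
# single points of failure, and AND gates model redundancy.
def p_or(probs):
    """P(at least one input fails), independent inputs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(probs):
    """P(all inputs fail), independent inputs."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Bill of materials: component A is single-sourced; component B is
# dual-sourced, so B fails only if both suppliers fail.
supplier_fail = {"A1": 0.08, "B1": 0.10, "B2": 0.12}

def top_event(fail):
    comp_a = fail["A1"]
    comp_b = p_and([fail["B1"], fail["B2"]])
    return p_or([comp_a, comp_b])    # product delayed if any component fails

print(f"baseline delay risk: {top_event(supplier_fail):.4f}")
mitigated = dict(supplier_fail, A1=0.03)  # invest in developing supplier A1
print(f"after mitigating A1: {top_event(mitigated):.4f}")
```

Repeating this evaluation for each candidate mitigation and cost yields the kind of risk-versus-budget Pareto frontier the article reports.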

A stochastic programming model with endogenous and exogenous uncertainty for reliable network design under random disruption

Designing and maintaining a reliable and efficient transportation network is an important industrial problem. Integrating infrastructure protection with the network design model is efficient, as these models provide strategic decisions that make a transportation network simultaneously efficient and reliable. We studied a combined network design and infrastructure protection problem subject to random disruptions, where the protection is multi-level and imperfect and the effect of a disruption is likewise imperfect. In this research, we modeled a resource-constrained decision maker seeking to optimally allocate protection resources to the facilities and construct links in the network to minimize the expected post-disruption transportation cost (PDTC). We modeled the problem as a two-stage stochastic program with both endogenous and exogenous uncertainty: a facility’s post-disruption capacity depends probabilistically on the protection decision, making that uncertainty endogenous, while the disruption itself occurs randomly, independent of any decision, and is therefore exogenous; the link construction decision directly shapes the second-stage transportation decisions. We implemented an accelerated L-shaped algorithm to solve the model and predictive modeling techniques to estimate the probability of a facility’s post-disruption capacity for a given protection level and disruption intensity. Numerical results show that solution quality is sensitive to the number of protection levels modeled; the average reduction in the expected PDTC is 18.7% as the number of protection levels increases from 2 to 5. Results demonstrate that the mean-value model performs very poorly as the uncertainty increases. Results also indicate that the stochastic programming model is sensitive to the estimation error of the predictive modeling techniques; on average, the expected PDTC is 6.38% higher when the least accurate prediction model is used.
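
The generic shape of such a two-stage stochastic program with decision-dependent scenario probabilities is sketched below in our own notation (not the paper's exact formulation):

```latex
% Generic two-stage stochastic program with decision-dependent
% (endogenous) scenario probabilities; notation is illustrative. Here
% x collects the first-stage protection and link-construction
% decisions, p_s(x) is the probability of post-disruption scenario s
% given the protection decision, and Q(x,s) is the optimal
% second-stage transportation cost.
\begin{equation}
  \min_{x \in X} \; c^{\top}x \;+\; \sum_{s \in S} p_s(x)\, Q(x, s),
  \qquad
  Q(x,s) = \min_{y \ge 0} \bigl\{ q^{\top}y : W y \ge h_s - T_s x \bigr\}.
\end{equation}
```

The dependence of $p_s(x)$ on $x$ is what breaks the standard L-shaped decomposition and motivates the accelerated variant and the predictive probability estimates described above.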

Transportation Research

Connected Infrastructure Network Design Under Additive Service Utilities

An infrastructure system usually contains a number of interconnected infrastructure links that connect users to services or products. Where to locate these infrastructure links is a challenging problem that largely determines the efficiency and quality of the network. This paper studies a new location design problem that aims to maximize the total weighted benefit between users and multiple services, measured by the amount of connectivity between users and links in the network. This problem is investigated from both analytical and computational points of view. First, analytical properties of special cases of the problem are described. Next, two integer programming model formulations are presented for the general problem. We also test intuitive heuristics, including greedy and interchange algorithms, and find that the interchange algorithm efficiently yields near-optimum solutions. Finally, a set of numerical examples demonstrates the proposed models and reveals interesting managerial insights. In particular, we find that a more distance-dependent utility measure and a higher concentration of users help achieve a better total utility. As the population becomes increasingly concentrated, the optimal link design evolves from a linear path to a cluster of links around the population center. As the budget level increases, the installed links gradually sprawl from the population center towards the periphery, and in the case of multiple population centers, they grow and eventually merge into one connected component.
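
A minimal sketch of greedy and interchange heuristics on a toy link-selection instance is shown below; the utility function and data are illustrative assumptions rather than the paper's formulations:

```python
# Hedged sketch of greedy and interchange heuristics for selecting
# which links to install under a budget; the additive-with-synergy
# utility is a stand-in for the connectivity-based benefit measure.
import itertools

links = ["a", "b", "c", "d", "e"]
BUDGET = 3  # number of links that can be installed

base = {"a": 4.0, "b": 3.0, "c": 5.0, "d": 2.0, "e": 3.5}
synergy = {("a", "b"): 1.5, ("b", "c"): 2.0, ("d", "e"): 1.0}

def utility(sel):
    u = sum(base[l] for l in sel)
    u += sum(v for (i, j), v in synergy.items() if i in sel and j in sel)
    return u

# Greedy: repeatedly add the link with the largest marginal utility.
sel = set()
while len(sel) < BUDGET:
    sel.add(max((l for l in links if l not in sel),
                key=lambda l: utility(sel | {l})))

# Interchange: swap an installed link for an uninstalled one while
# any single swap improves the utility.
improved = True
while improved:
    improved = False
    for inside, outside in itertools.product(
            list(sel), [l for l in links if l not in sel]):
        cand = (sel - {inside}) | {outside}
        if utility(cand) > utility(sel):
            sel, improved = cand, True
            break

print(sorted(sel), utility(sel))
```

The interchange pass can escape the myopic choices the greedy pass locks in, consistent with the abstract's finding that interchange yields near-optimum solutions.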

A Model-Based Systems Engineering Approach to Critical Infrastructure Vulnerability Assessment and Decision Analysis

Securing critical infrastructure against attack presents significant challenges. As new infrastructure is built and existing infrastructure is maintained, a method is needed to assess vulnerabilities and support decision makers in determining the best use of security resources. In response to this need, this research develops a methodology for performing vulnerability assessment and decision analysis of critical infrastructure using model-based systems engineering, an approach that has not previously been applied to this problem. The approach presented allows architects to link regulatory requirements, system architecture, subject matter expert opinion, and attack vectors to a Department of Defense Architecture Framework (DoDAF)-based model that allows decision makers to evaluate system vulnerability and determine alternatives for securing their systems within their budget constraints. The decision analysis is performed using an integer linear program that is integrated with DoDAF to provide solutions for allocating scarce security resources. Securing an electrical substation is used as an illustrative case study to demonstrate the methodology. The case study shows that the method presented here can be used to answer key questions, for example: which security resources should a decision maker invest in, given their budget constraints? Results show that the modeling and analysis approach provides a means to effectively evaluate infrastructure vulnerability and presents a set of security alternatives for decision makers to choose from, based on their vulnerabilities and budget profile.
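
A generic form of the budget-constrained countermeasure-selection integer linear program suggested by the abstract is sketched below, in our own illustrative notation:

```latex
% Generic budget-constrained countermeasure-selection ILP; the
% notation is illustrative, not the paper's exact model. x_j = 1 if
% countermeasure j is purchased, v_i is the consequence of attack
% vector i, M(i) is the set of countermeasures that mitigate vector i,
% and z_i = 1 if vector i is left unmitigated.
\begin{align}
  \min_{x, z} \quad & \textstyle\sum_{i \in I} v_i\, z_i \\
  \text{s.t.} \quad & \textstyle\sum_{j \in J} c_j\, x_j \le B
      && \text{(security budget)} \\
  & z_i \ge 1 - \textstyle\sum_{j \in M(i)} x_j
      && \forall i \in I \\
  & x_j \in \{0,1\}, \; z_i \in \{0,1\}
      && \forall j \in J,\; i \in I.
\end{align}
```

In the DoDAF-integrated workflow described above, the attack vectors, consequences, and mitigation mappings would be drawn from the architecture model itself, so the ILP stays synchronized with the system design as it evolves.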