Link prediction is one of the most productive branches in network science, aiming to predict links that should exist but have not yet been observed, or links that will appear during the evolution of the network. Over nearly two decades, the field of link prediction has amassed a substantial body of research, encompassing a plethora of algorithms and diverse applications. For any algorithm, one or more evaluation metrics are required to assess its performance. Because different evaluation metrics can provide different assessments of algorithm performance, how to select appropriate evaluation metrics is a fundamental issue in link prediction. To address this issue, we propose a novel measure that quantifies the discriminability of any evaluation metric given a real network and an algorithm. Based on 131 real networks and 20 representative algorithms, we systematically compare the discriminabilities of eight evaluation metrics, and demonstrate that H-measure and Area Under the ROC Curve (AUC) exhibit the strongest discriminabilities, followed by Normalized Discounted Cumulative Gain (NDCG). Our finding is robust for networks in different domains and algorithms of different types. This study provides insights into the selection of evaluation metrics, which may further contribute to standardizing the evaluation process of link prediction algorithms.
The mechanisms by which media inhomogeneity affects the three-wave parametric instability (PI), including the wave number mismatch and the parameter gradients, are investigated using an approach based on the Wentzel-Kramers-Brillouin-Jeffreys (WKBJ) approximation. This approach transforms the coupled wave equations into an amplitude equation and iteratively solves its characteristic polynomials. By analyzing the solutions, we propose that the wave number of the quasi-mode, a key term in the wave number mismatch of non-resonant-type PI, should be a complex root of the quasi-mode's linear dispersion equation. Based on this, we derive a unified amplification factor formula that covers the resonant and non-resonant, forward-scattered and backward-scattered types of PI. The impact of parameter gradients on the local spatial growth rate becomes significant when the inhomogeneity exceeds 10^-3. Considering parameter gradients extends our approach's validity to an inhomogeneity of about 10^-2. This approach holds promise for more specific PI modeling in the future.
Link prediction has become a critical problem in network science and has thus attracted increasing research interest. Popularity and similarity are two primary mechanisms in the formation of real networks. However, the roles of popularity and similarity mechanisms in link prediction across networks from various domains remain poorly understood. Accordingly, this study used orbit degrees of graphlets to construct multi-order popularity- and similarity-based network link predictors, demonstrating that traditional popularity- and similarity-based indices can be efficiently represented in terms of orbit degrees. Moreover, we designed a supervised learning model that fuses multiple orbit-degree-based features and validated its link prediction performance. We also evaluated the mean absolute Shapley additive explanations of each feature within this model across 550 real-world networks from six domains. We observed that the homophily mechanism, which is a similarity-based feature, dominated social networks, with a win rate of 91%. Moreover, a different similarity-based feature was prominent in economic, technological, and information networks. Finally, no single feature dominated the biological and transportation networks. The proposed approach improves the accuracy and interpretability of link prediction, thus facilitating the analysis of complex networks.
In this work, we study the nucleation of quasicrystals from liquid or periodic crystals by developing an efficient order-order phase transition algorithm, namely the nullspace-preserving saddle search method. Specifically, we focus on nucleation and phase transitions of the decagonal quasicrystal (DQC) based on the Lifshitz-Petrich model. We present the nucleation path of DQC from the liquid and demonstrate one- and two-stage transition paths between DQC and periodic crystals. We provide a perspective based on group-subgroup phase transitions and nucleation rates to understand the nucleation and phase transition mechanisms involving DQC. These results reveal the one-step and stepwise modes of symmetry breaking or recovery in the phase transition from DQC, where the stepwise modes are more probable.
Sebastian Strempfer, Zichao Wendy Di, Kazutomo Yoshii, Yue Cao, Qingteng Zhang, Eric M. Dufresne, Mathew Cherukara, Suresh Narayanan, Martin V. Holt, Antonino Miceli, Tao Zhou
The construction of highly coherent x-ray sources has enabled new research opportunities across the scientific landscape. The maximum raw data rate per beamline now exceeds 40 GB/s, posing unprecedented challenges for the online processing and offline storage of such big data. This challenge is particularly prominent for x-ray photon correlation spectroscopy (XPCS), where real-time analyses require simultaneous calculations on all previously acquired data in the time series. We present a homomorphic compression scheme to effectively reduce the computational time and memory space required for XPCS analysis. Leveraging similarities in the mathematical expression between a matrix-based compression algorithm and the correlation calculation, our approach allows direct operation on the compressed data without their decompression. The lossy compression reduces the computational time by a factor of 10,000, enabling real-time calculation of the correlation functions at kHz frame rates. Our demonstration of a homomorphic compression of scientific data provides an effective solution to the big data challenge at coherent light sources. Beyond the example shown in this work, the framework can be extended to facilitate real-time operations directly on a compressed data stream for other techniques.
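The paper's matrix-based compression algorithm is not spelled out in the abstract; as a generic sketch of the homomorphic idea only (assuming a simple low-rank factorization, not the authors' exact scheme), a frame-frame correlation matrix $XX^T$ can be computed directly from compressed factors $X \approx UV$, without ever reconstructing $X$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XPCS stack: an (n_frames x n_pixels) intensity matrix that is
# exactly rank-r here, standing in for lossy-compressed detector data.
n_frames, n_pixels, rank = 50, 400, 5
U = rng.random((n_frames, rank))   # compressed temporal factor
V = rng.random((rank, n_pixels))   # compressed spatial factor
X = U @ V                          # "raw" data, reconstructed only for checking

# Unnormalized two-time correlation: C[i, j] = sum_p X[i, p] * X[j, p].
C_raw = X @ X.T                    # O(n_frames^2 * n_pixels), needs full data

# Homomorphic route: operate on the compressed factors alone.
G = V @ V.T                        # small r x r Gram matrix, computed once
C_compressed = U @ G @ U.T         # O(n_frames^2 * r), no decompression

assert np.allclose(C_raw, C_compressed)
```

The speedup comes from replacing the pixel dimension (here 400, in practice millions) with the compression rank in the inner product.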
The precise control of liquid-liquid phase separation (LLPS) is the key to developing cutting-edge technologies that benefit diverse disciplines. Fluid flow was found to be capable of controlling the structure and effective temperature of LLPS, but the extent and precision of control were less than optimal. In this article, we propose that patterned flow can be employed as a generic tool to manipulate LLPS effectively. By combining theoretical modeling and numerical simulations, we demonstrate that flows with tailor-made structures can become functional, allowing us to control diverse aspects of LLPS. Typical examples include the capture and pinning of droplets, fine-tuning of droplet sizes, forced assembly of periodic droplet arrays, and the remodeling of the kinetics and structure of phase separation. These manipulations are grounded in the redistribution of chemical potential by the structured flow. Our results can not only lead to potential LLPS-based technologies but also highlight the rich behavior of LLPS introduced by the patterned flow.
In the digital era, data has become a pivotal asset, advancing technologies such as autonomous driving. Despite this, data trading faces challenges like the absence of robust pricing methods and the lack of trustworthy trading mechanisms. To address these challenges, we introduce a traffic-oriented data trading platform named Data on The Move (DTM), integrating traffic simulation, data trading, and Artificial Intelligence (AI) agents. The DTM platform supports evidence-based data value evaluation and AI-based trading mechanisms. Leveraging the common-sense capabilities of Large Language Models (LLMs) to assess traffic state and data value, DTM can determine reasonable traffic data pricing through multi-round interaction and simulations. Moreover, DTM provides pricing method validation by simulating traffic systems, multi-agent interactions, and the heterogeneity and irrational behaviors of individuals in the trading market. Within the DTM platform, entities such as connected vehicles and traffic light controllers can engage in information collection, data pricing, trading, and decision-making. Simulation results demonstrate that our proposed AI agent-based pricing approach enhances data trading by offering rational prices, as evidenced by the observed improvement in traffic efficiency. This underscores the effectiveness and practical value of DTM, offering new perspectives for the evolution of data markets and smart cities. To the best of our knowledge, this is the first study employing LLMs in data pricing and a pioneering data trading practice in the field of intelligent vehicles and smart cities.
Marc Zajac, Tao Zhou, Tiannan Yang, Sujit Das, Yue Cao, Burak Guzelturk, Vladimir Stoica, Mathew Cherukara, John W. Freeland, Venkatraman Gopalan, Ramamoorthy Ramesh, Lane W. Martin, Long-Qing Chen, Martin Holt, Stephan Hruszkewycz, Haidan Wen
Adaptive networks can sense and adjust to dynamic environments to optimize their performance. Understanding their nanoscale responses to external stimuli is essential for applications in nanodevices and neuromorphic computing. However, it is challenging to image such responses on the nanoscale with crystallographic sensitivity. Here, the evolution of nanodomain networks in (PbTiO3)n/(SrTiO3)n superlattices was directly visualized in real space as the system adapts to ultrafast repetitive optical excitations that emulate controlled neural inputs. The adaptive response allows the system to explore a wealth of metastable states that were previously inaccessible. Their reconfiguration and competition were quantitatively measured by scanning x-ray nanodiffraction as a function of the number of applied pulses, in which crystallographic characteristics were quantitatively assessed by assorted diffraction patterns using unsupervised machine-learning methods. The corresponding domain boundaries and their connectivity were drastically altered by light, holding promise for light-programmable nanocircuits in analogy to neuroplasticity. Phase-field simulations elucidate that the reconfiguration of the domain networks is a result of the interplay between photocarriers and transient lattice temperature. The demonstrated optical control scheme and the uncovered nanoscopic insights open opportunities for remote control of adaptive nanoscale domain networks.
Understanding how student peers influence learning outcomes is crucial for effective education management in complex social systems. The complexities of peer selection and evolving peer relationships, however, pose challenges for identifying peer effects using static observational data. Here we use both null-model and regression approaches to examine peer effects using longitudinal data from 5,272 undergraduates, where roommate assignments are plausibly random upon enrollment and roommate relationships persist until graduation. Specifically, we construct a roommate null model by randomly shuffling students among dorm rooms and introduce an assimilation metric to quantify similarities in roommate academic performance. We find significantly larger assimilation in actual data than in the roommate null model, suggesting roommate peer effects, whereby roommates have more similar performance than expected by chance alone. Moreover, assimilation exhibits an overall increasing trend over time, suggesting that peer effects become stronger the longer roommates live together. Our regression analysis further reveals the moderating role of peer heterogeneity. In particular, when roommates perform similarly, the positive relationship between a student's future performance and their roommates' average prior performance is more pronounced, and their ordinal rank in the dorm room has an independent effect. Our findings contribute to understanding the role of college roommates in influencing student academic performance.
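The null-model procedure can be sketched in a few lines (toy GPA numbers and a simple within-room gap statistic are hypothetical; the actual study uses longitudinal records of 5,272 undergraduates): shuffle students among rooms while preserving room sizes, then compare within-room similarity in the real assignment against the shuffled baseline.

```python
import random
import statistics

def mean_pairwise_gap(rooms):
    """Mean within-room pairwise absolute GPA difference (smaller = more similar)."""
    diffs = []
    for gpas in rooms:
        diffs += [abs(a - b) for i, a in enumerate(gpas) for b in gpas[i + 1:]]
    return statistics.mean(diffs)

def null_model_gap(rooms, n_shuffles=200, seed=1):
    """Average gap after repeatedly shuffling students among rooms of fixed sizes."""
    rng = random.Random(seed)
    pool = [g for room in rooms for g in room]
    sizes = [len(room) for room in rooms]
    gaps = []
    for _ in range(n_shuffles):
        rng.shuffle(pool)
        shuffled, k = [], 0
        for s in sizes:
            shuffled.append(pool[k:k + s])
            k += s
        gaps.append(mean_pairwise_gap(shuffled))
    return statistics.mean(gaps)

# Toy data: rooms where roommates already perform similarly.
rooms = [[3.9, 3.8, 3.7], [2.1, 2.3, 2.2], [3.0, 3.1, 2.9]]
actual = mean_pairwise_gap(rooms)
null = null_model_gap(rooms)
print(actual < null)  # peer-effect signature: roommates more alike than chance
```

A smaller gap in the actual data than in the shuffled null corresponds to the paper's "larger assimilation than expected by chance alone".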
Scanning X-ray nanodiffraction microscopy is a powerful technique for spatially resolving nanoscale structural morphologies by diffraction contrast. One of the critical challenges in experimental nanodiffraction data analysis is posed by the convergence angle of nanoscale focusing optics, which creates a simultaneous dependency of the far-field scattering data on three independent components of the local strain tensor, corresponding to dilation and two potential rigid-body rotations of the unit cell. All three components are in principle resolvable through a spatially mapped sample tilt series; however, traditional data analysis is computationally expensive and prone to artifacts. In this study, we implement NanobeamNN, a convolutional neural network specifically tailored to the analysis of scanning probe X-ray microscopy data. NanobeamNN learns lattice strain and rotation angles from simulated diffraction of a focused X-ray nanobeam by an epitaxial thin film and can directly make reasonable predictions on experimental data without the need for additional fine-tuning. We demonstrate that this approach represents a significant advancement in computational speed over conventional methods, as well as a potential improvement in accuracy over the current standard.
Ptychography is a powerful imaging technique that is used in a variety of fields, including materials science, biology, and nanotechnology. However, the accuracy of the reconstructed ptychography image is highly dependent on the accuracy of the recorded probe positions, which often contain errors. These errors are typically corrected jointly with phase retrieval through numerical optimization approaches. When the error accumulates along the scan path or when the error magnitude is large, these approaches may fail to converge to a satisfactory result. We propose a fundamentally new approach to ptychography probe position prediction for data with large position errors, where a neural network is used to perform single-shot phase retrieval on individual diffraction patterns, yielding the object image at each scan point. The pairwise offsets among these images are then found using a robust image registration method, and the results are combined to yield the complete scan path by constructing and solving a linear equation. We show that our method can achieve good position prediction accuracy for data with large and accumulating errors on the order of $10^2$ pixels, a magnitude that often makes optimization-based algorithms fail to converge. For ptychography instruments without sophisticated position control equipment such as interferometers, our method holds significant practical potential.
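The final step described above, combining pairwise offsets into a complete scan path, reduces to a sparse linear least-squares problem. A minimal 1-D sketch (synthetic positions and noise; the actual method registers 2-D images obtained from single-shot phase retrieval):

```python
import numpy as np

def positions_from_offsets(n, pairs, offsets):
    """Recover n scan positions (up to a global shift) from pairwise offsets.

    Each measured offset d_k ~ x[j] - x[i] contributes one row of a sparse
    linear system A x = d; anchoring x[0] = 0 fixes the global shift.
    """
    A = np.zeros((len(pairs) + 1, n))
    d = np.zeros(len(pairs) + 1)
    for k, ((i, j), off) in enumerate(zip(pairs, offsets)):
        A[k, i], A[k, j] = -1.0, 1.0
        d[k] = off
    A[-1, 0] = 1.0  # anchor row: x[0] = 0
    x, *_ = np.linalg.lstsq(A, d, rcond=None)
    return x

# Toy 1-D scan path with redundant, noisy offset measurements.
true_x = np.array([0.0, 10.0, 21.0, 29.0, 41.0])
pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3), (2, 4)]
rng = np.random.default_rng(0)
offsets = [true_x[j] - true_x[i] + rng.normal(0, 0.1) for i, j in pairs]

est = positions_from_offsets(len(true_x), pairs, offsets)
print(np.abs(est - true_x).max())  # sub-pixel deviation from the true path
```

The redundancy (more offset measurements than unknowns) is what keeps registration noise from accumulating along the path, in contrast to simply summing offsets along a single chain.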
Surface acoustic wave devices are key components for processing radio frequency signals in wireless communication because these devices offer simultaneously high performance, compact size and low cost. The optimization of the device structure requires a quantitative understanding of energy conversion and loss mechanisms. Stroboscopic full-field diffraction x-ray microscopy studies of a prototypical one-port resonator device revealed the existence of unanticipated acoustic loss. A non-uniform acoustic excitation in the active area was responsible for the substantial end and side leakages observed at the design frequency. Quantitative analysis of the strain amplitude using a wave decomposition method allowed the determination of several key device parameters. This high-resolution spatiotemporal strain imaging technique is, more generally, suited for studying nanophononics, specifically when the feature size is smaller than optical wavelengths. The strain sensitivity allows precise measurement of acoustic waves with picometer-scale amplitude.
The evolution processes of complex systems carry key information about the systems' functional properties. Applying machine learning algorithms, we demonstrate that the historical formation process of various networked complex systems can be extracted, including protein-protein interaction, ecology, and social network systems. The recovered evolution process demonstrates immense scientific value, such as interpreting the evolution of protein-protein interaction networks, facilitating structure prediction, and particularly revealing key co-evolution features of network structures, such as preferential attachment, community structure, local clustering, and degree-degree correlation, which could not be explained collectively by previous theories. Intriguingly, we discover that for large networks, if the performance of the machine learning model is slightly better than a random guess on the pairwise order of links, reliable restoration of the overall network formation process can be achieved. This suggests that evolution history restoration is generally highly feasible on empirical networks.
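The claim that a weakly informative pairwise classifier suffices can be illustrated with a toy aggregation experiment (hypothetical, not the paper's model): even when each "which link formed first" vote is only slightly better than a coin flip, a simple Borda count over all pairs recovers most of the global formation order.

```python
import random

def restore_order(n, accuracy, seed=0):
    """Rank n links by noisy pairwise 'formed earlier' votes (Borda count).

    True formation order is 0, 1, ..., n-1; each pairwise vote is
    correct with probability `accuracy`.
    """
    rng = random.Random(seed)
    wins = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            earlier = i if rng.random() < accuracy else j
            wins[earlier] += 1
    return sorted(range(n), key=lambda k: -wins[k])

def pairwise_agreement(ranking):
    """Fraction of link pairs whose recovered order matches the true order."""
    n = len(ranking)
    pos = {link: r for r, link in enumerate(ranking)}
    good = sum(pos[i] < pos[j] for i in range(n) for j in range(i + 1, n))
    return good / (n * (n - 1) / 2)

n = 200
weak = pairwise_agreement(restore_order(n, accuracy=0.55))  # barely above chance
coin = pairwise_agreement(restore_order(n, accuracy=0.50))  # pure noise
print(weak, coin)  # weak votes still restore most of the global order
```

The intuition: each link participates in n-1 comparisons, so per-vote noise averages out and the aggregate ordering is far more reliable than any single vote.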
Link prediction is a paradigmatic and challenging problem in network science, which aims to predict missing links, future links and temporal links based on known topology. Along with the increasing number of link prediction algorithms, a critical yet previously ignored risk is that the evaluation metrics for algorithm performance are usually chosen at will. This paper implements extensive experiments on hundreds of real networks and 25 well-known algorithms, revealing significant inconsistency among evaluation metrics; that is, different metrics can produce remarkably different rankings of algorithms. Therefore, we conclude that no single metric can comprehensively or credibly evaluate algorithm performance. Further analysis suggests the usage of at least two metrics: one is the area under the receiver operating characteristic curve (AUC), and the other is one of the following three candidates: the area under the precision-recall curve (AUPR), the area under the precision curve (AUC-Precision), or the normalized discounted cumulative gain (NDCG). In addition, as we have proved the essential equivalence of threshold-dependent metrics, if some specific thresholds are meaningful in a link prediction task, we can consider any one threshold-dependent metric with those thresholds. This work completes a missing part in the landscape of link prediction, and provides a starting point toward a well-accepted criterion or standard for selecting proper evaluation metrics for link prediction.
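A self-contained toy example (constructed here, not taken from the paper) makes the inconsistency concrete: for two hypothetical predictors scoring 100 candidate links, of which 10 are true, AUC prefers one predictor while precision@10 prefers the other.

```python
def auc(scores, labels):
    """Probability that a random positive outscores a random negative (ties = 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def precision_at_k(scores, labels, k):
    """Fraction of true links among the k highest-scored candidates."""
    top = sorted(zip(scores, labels), reverse=True)[:k]
    return sum(l for _, l in top) / k

n = 100
scores = list(range(n, 0, -1))  # position 0 receives the highest score
# Predictor A: all 10 true links ranked 11th-20th (good AUC, empty top 10).
labels_A = [0] * 10 + [1] * 10 + [0] * 80
# Predictor B: 5 true links on top, 5 at the very bottom.
labels_B = [1] * 5 + [0] * 90 + [1] * 5

print(auc(scores, labels_A), precision_at_k(scores, labels_A, 10))  # ~0.889, 0.0
print(auc(scores, labels_B), precision_at_k(scores, labels_B, 10))  # 0.5,    0.5
```

AUC ranks A above B, while precision@10 reverses the order, so the "best" algorithm depends entirely on the chosen metric.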
Link prediction aims to predict the potential existence of links between two unconnected nodes within a network based on the known topological characteristics. Evaluation metrics are used to assess the effectiveness of algorithms in link prediction. The discriminating ability of these evaluation metrics is vitally important for accurately evaluating link prediction algorithms. In this study, we propose an artificial network model, based on which one can adjust a single parameter to monotonically and continuously tune the prediction accuracy of a specifically designed link prediction algorithm. Building upon this foundation, we present a framework for characterizing the effectiveness of evaluation metrics by focusing on their discriminating ability. Specifically, a quantitative comparison of the abilities to correctly discern varying prediction accuracies was conducted across nine evaluation metrics: Precision, Recall, F1-Measure, Matthews Correlation Coefficient (MCC), Balanced Precision (BP), the Area Under the receiver operating characteristic Curve (AUC), the Area Under the Precision-Recall curve (AUPR), Normalized Discounted Cumulative Gain (NDCG), and the Area Under the magnified ROC (AUC-mROC). The results indicate that the discriminating abilities of three metrics, AUC, AUPR, and NDCG, are significantly higher than those of the other metrics.
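Of the nine metrics, NDCG is perhaps the least familiar in link prediction. A minimal sketch of its computation for binary relevance labels (an illustrative helper written here, not the paper's code): each true link contributes gain discounted by the logarithm of its rank, normalized by the ideal ordering.

```python
import math

def ndcg(scores, labels):
    """Normalized discounted cumulative gain for a binary link-prediction task."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    dcg = sum(labels[i] / math.log2(rank + 2) for rank, i in enumerate(order))
    ideal = sum(1.0 / math.log2(rank + 2) for rank in range(sum(labels)))
    return dcg / ideal

# A perfect predictor puts both true links first: NDCG = 1.
assert ndcg([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]) == 1.0
# Burying one true link near the bottom lowers NDCG, but less than
# burying it would lower Precision@2 -- the log discount is graded.
print(ndcg([0.9, 0.1, 0.3, 0.8], [1, 1, 0, 0]))
```

The graded rank discount is one reason NDCG discriminates between predictors whose top-k precision is identical.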
The co-evolution of economic and ecological activities represents one of the fundamental challenges in the realm of sustainable development. This study of word trends in mainstream newspapers from the UK and China reveals that both early-industrialised countries and latecomers follow three modes of economic and ecological co-evolution. First, both economic and ecological words demonstrate an S-shaped growth trajectory, and this mode underscores the importance of information propagation, whilst also highlighting the crucial role of self-organisation in societal acceptance. Second, the co-occurrence of these two types of words exhibits a Z-shaped relationship: for two-thirds of the observed period, they display synergistic interactions, while the remaining time shows trade-offs. Lastly, the words related to ecological degradation follow M-shaped trajectories in parallel with economic growth, suggesting periodic disruptions and reconstructions in their interrelationships. Our findings contribute to a more nuanced understanding of the co-evolutionary mechanisms that govern collective behaviours in human society.
One unique feature of nonlinear dynamical systems is the existence of superharmonic and subharmonic resonances in addition to primary resonances. In this study, an effective vibration testing methodology is introduced for the experimental identification of these secondary resonances. The proposed method relies on phase-locked loop control combined with adaptive filters for online Fourier decomposition. To this end, the concept of a resonant phase lag is exploited to define the target phase lag to be followed during the experimental continuation process. The method is demonstrated using two systems featuring cubic nonlinearities, namely a numerical Duffing oscillator and a physical experiment comprising a clamped-clamped thin beam. The obtained results highlight that the control scheme can accurately characterize secondary resonances as well as track their backbone curves. A particularly salient feature of the developed algorithm is that, starting from the rest position, it facilitates an automatic and smooth dynamic state transfer toward a point on an isolated subharmonic branch, hence inducing branch switching.
The magnetization manipulation by spin-orbit torques (SOTs) in nonmagnetic-metal (NM)/ferromagnet (FM) heterostructures has provided great opportunities for spin devices. Besides the conventional spin Hall effect (SHE) in heavy metals with strong spin-orbit coupling, orbital currents have been proposed as another promising approach to generate strong SOTs. Here, we systematically study the SOT efficiency and its dependence on the FM thickness and on different NM/FM interfaces in two prototypical Pt/Py and Ta/Py systems by inserting an ultrathin magnetic layer (0.4-nm-thick ML = Co, Fe, Gd, or Ni). The dampinglike (DL) torque efficiency $\xi_{DL}$ is significantly enhanced by inserting ultrathin Co, Fe, and Ni layers and is noticeably suppressed for the Gd insertion. Moreover, the Ni insertion results in a sign change of the field-like (FL) torque in Pt/Py and substantially reduces $\xi_{DL}$ in Ta/Py. These results are likely related to the additional spin currents generated by combining the orbital Hall effect (OHE) in the NM and orbital-to-spin conversion in the ML insertion layer and/or their interfaces, especially for the Ni insertion. Our results demonstrate that inserting an ultrathin ML can effectively manipulate the strength and sign of the SOTs, which would be helpful for spintronics applications.
Upgrades to advanced scientific user facilities such as next-generation x-ray light sources, nanoscience centers, and neutron facilities are revolutionizing our understanding of materials across the spectrum of the physical sciences, from life sciences to microelectronics. However, these facility and instrument upgrades come with a significant increase in complexity. Driven by more exacting scientific needs, instruments and experiments become more intricate each year. This increased operational complexity makes it ever more challenging for domain scientists to design experiments that effectively leverage the capabilities of these advanced instruments and to operate them. Large language models (LLMs) can perform complex information retrieval, assist in knowledge-intensive tasks across applications, and provide guidance on tool usage. Using x-ray light sources, leadership computing, and nanoscience centers as representative examples, we describe preliminary experiments with a Context-Aware Language Model for Science (CALMS) to assist scientists with instrument operations and complex experimentation. With the ability to retrieve relevant information from facility documentation, CALMS can answer simple questions on scientific capabilities and other operational procedures. With the ability to interface with software tools and experimental hardware, CALMS can conversationally operate scientific instruments. By making information more accessible and acting on user needs, LLMs could expand and diversify scientific facilities' users and accelerate scientific output.
Recent advances in image data processing through machine learning and especially deep neural networks (DNNs) allow for new optimization and performance-enhancement schemes for radiation detectors and imaging hardware through data-endowed artificial intelligence. We give an overview of data generation at photon sources, deep learning-based methods for image processing tasks, and hardware solutions for deep learning acceleration. Most existing deep learning approaches are trained offline, typically using large amounts of computational resources. However, once trained, DNNs can achieve fast inference speeds and can be deployed to edge devices. A new trend is edge computing with less energy consumption (hundreds of watts or less) and real-time analysis potential. While popularly used for edge computing, electronics-based hardware accelerators ranging from general-purpose processors such as central processing units (CPUs) to application-specific integrated circuits (ASICs) are constantly reaching performance limits in latency, energy consumption, and other physical constraints. These limits give rise to next-generation analog neuromorphic hardware platforms, such as optical neural networks (ONNs), for highly parallel, low-latency, and low-energy computing to boost deep learning acceleration.
A single-shot measurement technique for ultrafast phenomena with high throughput enables the capture of rare events within a short time scale, facilitating the exploration of rare ultrafast processes. Photonic time stretch stands out as a highly effective method for both detecting rapid events and achieving remarkable speed in imaging and ranging applications. The current time stretch method relies on costly passive mode-locked lasers with continuous and fixed spectra to capture fast transients and dilate their time scale using dispersion. This hinders the broad application of time stretch technology and presents synchronization challenges with ultrafast events for measurement. Here we report the first implementation of time stretch using continuous wave (CW) diode lasers with discrete and tunable spectra that are common in WDM optical communication. This approach offers the potential for more cost-effective and compact time stretch systems and simplifies laser synchronization with the input signal. Two different embodiments in the United States and Japan demonstrate the technique's operation and limitations, and potential applications to time stretch imaging and angular light scattering.
With their shielded 4f orbitals, rare-earth ions (REIs) offer optical and electron spin transitions with good coherence properties even when embedded in a host crystal matrix, highlighting their utility as promising quantum emitters and memories for quantum information processing. Among REIs, trivalent erbium (Er$^{3+}$) uniquely has an optical transition in the telecom C-band, ideal for transmission over optical fibers, making it well-suited for applications in quantum communication. The deployment of Er$^{3+}$ emitters into a thin film TiO$_2$ platform has been a promising step towards scalable integration; however, like many solid-state systems, the deterministic spatial placement of quantum emitters remains an open challenge. We investigate laser annealing as a means to locally tune the optical resonance of Er$^{3+}$ emitters in TiO$_2$ thin films on Si. Using both nanoscale X-ray diffraction measurements and cryogenic photoluminescence spectroscopy, we show that tightly focused below-gap laser annealing can induce anatase-to-rutile phase transitions in a nearly diffraction-limited area of the films and improve local crystallinity through grain growth. As a fraction of the Er:TiO$_2$ is converted to rutile, the Er$^{3+}$ optical transition blueshifts by 13 nm. We explore the effects of varying the laser annealing time and show that the amount of optically active Er:rutile increases linearly with laser power. We additionally demonstrate local phase conversion on microfabricated Si structures, which holds significance for quantum photonics.
Molecules are the smallest units of matter that can exist independently, remain relatively stable, and maintain physical and chemical activity. The atomic species, their arrangements, and chemical bonds are the key factors that dominate molecular structures and properties. Here we disclose a general chemical effect whereby liquid metals can directly cut off oxygen-containing groups in various molecular materials at room temperature, and then recombine the remaining groups to form functional materials, including nano-semiconductors. Based on this unique mechanism, we propose a basic tool, which we name liquid metal scissors, for molecular directional clipping and functional transformation. As a proof of concept, we demonstrate the capabilities of eGaIn scissors made of Ga and In particles, and reveal that the Ga on the surface of eGaIn can directly snatch oxygen atoms from various targeted substances such as H2O, CO2, or CH3OH molecules to form gallium oxides. As an illustration, after clipping, the remaining hydrogen atoms of H2O molecules recombine to form H2, while the remaining groups of CH3OH lead to H2, carbon quantum dots, and other related substances. If needed, more molecules can also be manipulated with such scissors. This finding refreshes basic chemical knowledge and suggests accessible routes for molecular weaving, which may overcome the limitations and single-purpose features of molecular substances. It also opens up a universal route for innovating future molecular chemical engineering, life science, energy and environment research, and biomedicine.
Qingxiang Li, Zichen Li, Xuqian Li, Zengyun Hu, Aiguo Dai, Wenjie Dong, Boyin Huang, Zhihong Jiang, Panmao Zhai, Tianjun Zhou, Phil Jones
As stated in the IPCC Assessment Reports (ARs), global warming is estimated relative to the 1850-1900 average (the pre-industrial global mean temperature estimated from relatively sparse observations). Given the impossibility of massively increasing observational data from the early period, accurately constraining this baseline has remained an unresolved issue. Here we developed a new statistical-physical model to quantify the contribution of external forcings to global warming as a "deterministic trend" of the surface temperature series (instead of as non-stationary processes that yield a stochastic trend) and constrained the reconstruction of the early time series (1850-1880). We find that the existing datasets slightly overestimated the temperature anomalies in this period, and thus the speed of global warming since pre-industrialization is still underestimated.
The manipulation and control of nanoscale magnetic spin textures is of rising interest as they are potential foundational units in next-generation computing paradigms. Achieving this requires a quantitative understanding of the spin texture behavior under external stimuli using in situ experiments. Lorentz transmission electron microscopy (LTEM) enables real-space imaging of spin textures at the nanoscale, but quantitative characterization of in situ data is extremely challenging. Here, we present an AI-enabled phase-retrieval method based on integrating a generative deep image prior with an image formation forward model for LTEM. Our approach uses a single out-of-focus image for phase retrieval and achieves significantly higher accuracy and robustness to noise compared to existing methods. Furthermore, our method is capable of isolating sample heterogeneities from magnetic contrast, as shown by application to simulated and experimental data. This approach allows quantitative phase reconstruction of in situ data and can also enable near real-time quantitative magnetic imaging.
Zichao Lin, Yulin Yao, Zhangning Xie, Dongbai Xue, Tong Zhou, Zhaohui Tang, Lihua Lei, Tao Jin, Xiong Dun, Xiao Deng, Xinbin Cheng, Tongbao Li Natural-constant-based metrology methods offer an effective approach to achieving traceability in nanometric measurements. The Cr grating, fabricated by atom lithography and featuring a pitch of $d=212.7705\pm0.0049~{\rm nm}$ traceable to the Cr transition frequency $^{7}S_{3}$ $\rightarrow$ $^{7}P_{4}^{0}$, demonstrates potential as a self-traceable length standard in nano-length metrology via grating interferometry. This research aims to analyze and engineer the diffraction characteristics that enhance the Cr grating as a self-traceable length standard within the length traceability chain based on the Cr transition frequency. Accordingly, we investigate the geometric morphology and diffraction characteristics of the Cr grating, analyze the influence of the grating's polarization-sensitive characteristics on the Littrow-configuration grating interferometer, and establish the criteria for Cr grating fabrication. Experimentally, we fabricate an expanded Cr grating by scanning atom lithography, characterize its diffraction performance, and conduct preliminary verification of length measurement in a self-traceable grating interferometer. This work adheres to the international trend of flattened metrology development, offering a valuable reference for advancing subsequent metrological technologies throughout the new traceability chain.
Zhenjie Gu, Zhangning Xie, Zhikun Chang, Guangxu Xiao, Zhijun Yin, Zichao Lin, Tong Zhou, Lihua Lei, Tao Jin, Dongbai Xue, Xiao Deng, Xinbin Chen, Tongbao Li Traceability of precision instruments and measuring methods is a core issue in metrology. In nanometer length measurement, laser interferometers are usually used to trace measured values to the laser wavelength, but the laser wavelength is sensitive to environmental disturbances. The chromium self-traceable grating, fabricated by the atom lithography technique, is an ideal nanometer length reference grating with pitch traceability. A new nanometer length traceability chain can be established based on this pitch traceability, and the grating is often used to calibrate the systematic error of atomic force microscopes. In this paper, a metrological self-mixing grating interferometer based on the chromium self-traceable grating (SMGI-Cr) is established for the first time; its interference phase is traceable directly to the pitch of the chromium self-traceable grating and indirectly to the chromium atomic transition frequency $^{7}S_{3}$ $\rightarrow$ $^{7}P_{4}^{0}$. Nanometer displacement measurement is also achieved with the SMGI-Cr, with a measurement error of no more than 0.2366% compared to a commercial interferometer.
To investigate whether bullying and psychological conditions are correlated, this study analyzed a survey of primary and secondary school students from Zigong City, Sichuan Province. A total of 95,545 students completed a personal information questionnaire, the Multidimensional Peer-Victimization Scale (MPVS), and eight other scales pertaining to various psychological problems. The data showed that 68,315 (71.5\%) participants had experienced school bullying to varying degrees, indicating the prevalence of bullying among adolescents. Chi-square tests revealed a strong correlation between school bullying and psychological conditions. This correlation was further explored through multivariate logistic regression, showing that students who experienced mild bullying had a 3.10 times higher probability of emotional and behavioral problems, a 4.06 times higher probability of experiencing prodromal symptoms of mental illness, a 4.72 times higher probability of anxiety, a 3.28 times higher probability of developing post-traumatic stress disorder (PTSD), a 4.07 times higher probability of poor sleep quality, a 3.13 times higher probability of internet addiction, a 2.18 times higher probability of poor mental health, and a 3.64 times higher probability of depression than students who did not experience bullying. The corresponding probabilities for students who experienced severe bullying were 11.35, 17.35, 18.52, 12.59, 11.67, 12.03, 4.64, and 5.34 times higher, respectively. In conclusion, school bullying and psychological conditions are significantly correlated among primary and secondary school students, and the more severe the bullying, the higher the probability of suffering from psychological problems.
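Odds ratios of the kind reported above are obtained by exponentiating logistic-regression coefficients. A minimal, hedged sketch on synthetic data (scikit-learn; the variable coding and data are illustrative assumptions, not the survey itself):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic cohort: bullying level 0 = none, 1 = mild, 2 = severe
# (illustrative data only, not the survey data).
n = 5000
bullying = rng.integers(0, 3, n)

# True model: log-odds of a psychological problem rise by 1.1 per level,
# so the true odds ratio per level is e^1.1, about 3.0.
logit = -2.0 + 1.1 * bullying
p = 1.0 / (1.0 + np.exp(-logit))
outcome = (rng.random(n) < p).astype(int)

# Logistic regression with weak regularization; exponentiating the
# fitted coefficient recovers the odds ratio per bullying level.
clf = LogisticRegression(C=1e6).fit(bullying.reshape(-1, 1), outcome)
odds_ratio = float(np.exp(clf.coef_[0, 0]))
```

With several covariates, the same exponentiation of each coefficient yields the adjusted odds ratios reported in multivariate analyses.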
Understanding different gender roles is a precondition for efforts to reduce gender inequality. This paper analyzes COVID-19 family clusters outside Hubei Province in mainland China during the 2020 outbreak, revealing significant differences in spreading patterns across gender and family roles. Results show that men are more likely to be the imported cases of a family cluster, while women are more likely to be infected within the family. This finding provides new supportive evidence of the men-as-breadwinner, women-as-homemaker (MBWH) gender roles in China. Further analyses reveal that the MBWH pattern is stronger in eastern than in western China, and stronger for younger than for older people. This paper offers not only valuable references for formulating gender-differentiated epidemic prevention policies but also an exemplification for studying group differences in similar scenarios.
LHCb collaboration, R. Aaij, A.S.W. Abdelmotteleb, C. Abellan Beteta, F. Abudinén, C. Achard, T. Ackernley, B. Adeva, M. Adinolfi, P. Adlarson, H. Afsharnia, C. Agapopoulou, C.A. Aidala, Z. Ajaltouni, S. Akar, K. Akiba, P. Albicocco, J. Albrecht, F. Alessio, M. Alexander, et al (1303) The LHCb upgrade represents a major change of the experiment. The detectors have been almost completely renewed to allow running at an instantaneous luminosity five times larger than that of the previous running periods. Readout of all detectors into an all-software trigger is central to the new design, facilitating the reconstruction of events at the maximum LHC interaction rate, and their selection in real time. The experiment's tracking system has been completely upgraded with a new pixel vertex detector, a silicon tracker upstream of the dipole magnet and three scintillating fibre tracking stations downstream of the magnet. The whole photon detection system of the RICH detectors has been renewed and the readout electronics of the calorimeter and muon systems have been fully overhauled. The first stage of the all-software trigger is implemented on a GPU farm. The output of the trigger provides a combination of fully reconstructed physics objects, such as tracks and vertices, ready for final analysis, and of entire events which need further offline reprocessing. This scheme required a complete revision of the computing model and rewriting of the experiment's software.
Bacteria can swim upstream in a narrow tube due to hydrodynamic interactions with the fluid flow, and pose a clinical threat of urinary tract infection to patients implanted with catheters. Coatings and structured surfaces have been proposed as ways to suppress bacterial contamination in catheters. However, no surface structuring or coating approach to date thoroughly addresses the contamination problem. Here, based on the physical mechanism of upstream swimming, we propose a novel geometric design, optimized by an AI model that predicts in-flow bacterial dynamics. The AI method, based on the Fourier neural operator, offers significant speedups over traditional simulation methods. Using Escherichia coli, we demonstrate the anti-infection mechanism in quasi-2D microfluidic experiments and evaluate the effectiveness of the design in 3D-printed prototype catheters under clinical flow rates. Our catheter design shows a 1-2 order-of-magnitude improvement in the suppression of bacterial contamination at the upstream end of the catheter, potentially prolonging the in-dwelling time for catheter use and reducing the overall risk of catheter-associated urinary tract infections.
Time stretch instruments have been exceptionally successful in discovering single-shot ultrafast phenomena such as optical rogue waves and have led to record-speed microscopy, spectroscopy, lidar, etc. These instruments encode ultrafast events into the spectrum of a femtosecond pulse and then dilate the time scale of the data using group velocity dispersion. Generating as much as a terabit of data per second, they are ideal partners for deep learning networks, which by their inherent complexity require large datasets for training. However, the inference time scale of neural networks, in the millisecond regime, is orders of magnitude longer than the data acquisition time scale of time stretch instruments. This underscores the need to explore means by which some of the lower-level computational tasks can be performed while the data is still in the optical domain. Nonlinear Schrödinger kernel computing addresses this predicament. It utilizes optical nonlinearities to map the data onto a new domain in which classification accuracy is enhanced, without increasing the data dimensions. One limitation of this technique is the fixed optical transfer function, which prevents training and generalizability. Here we show that the optical kernel can be effectively tuned and trained by utilizing digital phase encoding of the femtosecond laser pulse, leading to a reduction of the error rate in data classification.
Wenkai Zhu, Yingmei Zhu, Tong Zhou, Xianpeng Zhang, Hailong Lin, Qirui Cui, Faguang Yan, Ziao Wang, Yongcheng Deng, Hongxin Yang, Lixia Zhao, Igor Žutić, Kirill D. Belashchenko, Kaiyou Wang Magnetic tunnel junctions (MTJs) with conventional bulk ferromagnets separated by a nonmagnetic insulating layer are key building blocks in spintronics for magnetic sensors and memory. A radically different approach of using atomically-thin van der Waals (vdW) materials in MTJs is expected to boost their figure of merit, the tunneling magnetoresistance (TMR), while relaxing the lattice-matching requirements from the epitaxial growth and supporting high-quality integration of dissimilar materials with atomically-sharp interfaces. We report TMR up to 192% at 10 K in all-vdW Fe3GeTe2/GaSe/Fe3GeTe2 MTJs. Remarkably, instead of the usual insulating spacer, this large TMR is realized with a vdW semiconductor GaSe. Integration of two-dimensional ferromagnets in semiconductor-based vdW junctions offers gate-tunability, bias dependence, magnetic proximity effects, and spin-dependent optical-selection rules. We demonstrate that not just the magnitude, but also the TMR sign is tuned by the applied bias or the semiconductor thickness, enabling modulation of highly spin-polarized carriers in vdW semiconductors.
JUNO Collaboration, Angel Abusleme, Thomas Adam, Shakeel Ahmad, Rizwan Ahmed, Sebastiano Aiello, Muhammad Akram, Abid Aleem, Tsagkarakis Alexandros, Fengpeng An, Qi An, Giuseppe Andronico, Nikolay Anfimov, Vito Antonelli, Tatiana Antoshkina, Burin Asavapibhop, João Pedro Athayde Marcondes de André, Didier Auguste, Weidong Bai, Nikita Balashov, et al (597) The main task of the Top Tracker detector of the reactor neutrino experiment Jiangmen Underground Neutrino Observatory (JUNO) is to reconstruct and extrapolate atmospheric muon tracks down to the central detector. This muon tracker will help to evaluate the contribution of the cosmogenic background to the signal. The Top Tracker is located above JUNO's Water Cherenkov Detector and Central Detector, covering about 60% of the surface above them. The JUNO Top Tracker is constituted by the decommissioned OPERA experiment Target Tracker modules. The technology used consists of walls of two planes of plastic scintillator strips, one per transverse direction. Wavelength-shifting fibres collect the light signal emitted by the scintillator strips and guide it to both ends, where it is read by multianode photomultiplier tubes. Compared to the OPERA Target Tracker, the JUNO Top Tracker uses new electronics able to cope with the higher rate produced by the rock radioactivity, which exceeds that in the Gran Sasso underground laboratory. This paper presents the new electronics and mechanical structure developed for the Top Tracker of JUNO along with its expected performance based on the current detector simulation.
Angel Abusleme, Thomas Adam, Shakeel Ahmad, Rizwan Ahmed, Sebastiano Aiello, Muhammad Akram, Abid Aleem, Tsagkarakis Alexandros, Fengpeng An, Qi An, Giuseppe Andronico, Nikolay Anfimov, Vito Antonelli, Tatiana Antoshkina, Burin Asavapibhop, João Pedro Athayde Marcondes de André, Didier Auguste, Weidong Bai, Nikita Balashov, Wander Baldini, et al (597) The Jiangmen Underground Neutrino Observatory (JUNO), the first multi-kton liquid scintillator detector, which is under construction in China, will have a unique potential to perform a real-time measurement of solar neutrinos well below the few-MeV threshold typical for water Cherenkov detectors. JUNO's large target mass and excellent energy resolution are prerequisites for reaching unprecedented levels of precision. In this paper, we provide an estimate of the JUNO sensitivity to 7Be, pep, and CNO solar neutrinos that can be obtained via a spectral analysis above the 0.45 MeV threshold. This study is performed assuming different scenarios for the liquid scintillator radiopurity, ranging from the most optimistic one, corresponding to the radiopurity levels obtained by the Borexino experiment, up to the minimum requirements needed to perform the neutrino mass ordering determination with reactor antineutrinos - the main goal of JUNO. Our study shows that in most scenarios, JUNO will be able to improve on the current best measurements of the 7Be, pep, and CNO solar neutrino fluxes. We also study JUNO's capability to detect periodic time variations in the solar neutrino flux, such as the day-night modulation induced by neutrino flavor regeneration in the Earth, and the modulations induced by temperature changes driven by helioseismic waves.
With the continuing advances in scientific instrumentation, scanning microscopes are now able to image physical systems with up to sub-atomic-level spatial resolutions and sub-picosecond time resolutions. Commensurately, they are generating ever-increasing volumes of data, whose storage and analysis are becoming an increasingly difficult prospect. One approach to address this challenge is through self-driving experimentation techniques that can actively analyze the data being collected and use this information to make on-the-fly measurement choices, such that the data collected is sparse but representative of the sample and sufficiently informative. Here, we report the Fast Autonomous Scanning Toolkit (FAST) that combines a trained neural network, a route optimization technique, and efficient hardware control methods to enable a self-driving scanning microscopy experiment. The key features of our method are that it does not require any prior information about the sample, it has a very low computational cost, and it uses generic hardware controls with minimal experiment-specific wrapping. We test this toolkit in numerical experiments and a scanning dark-field x-ray microscopy experiment of a $WSe_2$ thin film, where our experiments show that a FAST scan of <25% of the sample is sufficient to produce both a high-fidelity image and a quantitative analysis of the surface distortions in the sample. We show that FAST can autonomously identify all features of interest in the sample while significantly reducing the scan time, the volume of data acquired, and the dose on the sample. The FAST toolkit is easy to apply to any scanning microscopy modality, and we anticipate that adoption of this technique will empower broader multi-level studies of the evolution of physical phenomena with respect to time, temperature, or other experimental parameters.
Caimei Liu, Min Li, Zhimin Wang, Jun Hu, Nikolay Anfimov, Lei Fan, Alberto Garfagnini, Guanghua Gong, Shaojing Hou, Beatrice Jelmini, Xiaolu Ji, Xiaoshan Jiang, Denis Korablev, Tobias Lachenmaier, Si Ma, Xiaoyan Ma, Zhe Ning, Alexander G. Olshevskiy, Zhaoyuan Peng, Zhonghua Qin, et al (17) The Jiangmen Underground Neutrino Observatory (JUNO) is a neutrino project with a 20-kton liquid scintillator detector located 700 m underground. The large 20-inch PMTs are one of the crucial components of the JUNO experiment, which aims at precision neutrino measurements with better than 3% energy resolution at 1 MeV. The excellent energy resolution and large fiducial volume provide many exciting opportunities for addressing important topics in neutrino and astroparticle physics. With container #D at the JUNO Pan-Asia PMT testing and potting station, the features of waterproof-potted 20-inch PMTs were measured with the JUNO 1F3 electronics prototype in waveform and charge, which is valuable for a better understanding of the performance of the waterproof-potted PMTs and the JUNO 1F3 electronics. In this paper, the basic features of the JUNO 1F3 electronics prototype run at Pan-Asia are introduced, followed by an analysis of the waterproof-potted 20-inch PMTs and a comparison with the results from the commercial electronics used by containers #A and #B.
Predictability is an emerging metric that quantifies the highest possible prediction accuracy for a given time series, and it is widely utilized in assessing known prediction algorithms and characterizing intrinsic regularities in human behaviors. Lately, increasing criticism has been aimed at the inaccuracy of the estimated predictability caused by the original entropy-based method. In this brief report, we rigorously prove that time series predictability is equivalent to a seemingly unrelated metric called the Bayes error rate, which explores the lowest error rate unavoidable in classification. This proof bridges two independently developed fields, so that each can immediately benefit from the other. For example, based on three theoretical models with known and controllable upper bounds of prediction accuracy, we show that estimation based on the Bayes error rate can largely solve the inaccuracy problem of predictability.
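The equivalence above can be illustrated on a toy next-symbol prediction task: for a known conditional distribution over outcomes, the Bayes error rate is one minus the expected maximum posterior probability, and predictability is its complement. A minimal numpy sketch (illustrative, not the authors' estimator):

```python
import numpy as np

# Toy model: a two-state Markov chain with transition matrix P.
# The Bayes-optimal predictor always picks the most likely next state,
# so its error rate is 1 - E[max_j P(next=j | current=i)].
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Stationary distribution pi solves pi = pi @ P (left eigenvector of
# eigenvalue 1, normalized to sum to one).
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

bayes_error = 1.0 - np.sum(pi * P.max(axis=1))   # -> 0.15 here
predictability = 1.0 - bayes_error               # upper bound on accuracy, 0.85
```

For this chain the stationary distribution is (0.75, 0.25), so the predictability (the highest achievable accuracy) is 0.75 × 0.9 + 0.25 × 0.7 = 0.85.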
The Fisher-Kolmogorov-Petrovsky-Piskunov (FKPP) equation provides a mean-field theory of out-of-time-ordered commutators in locally interacting quantum chaotic systems at high energy density; in the systems with power-law interactions, the corresponding fractional-derivative FKPP equation provides an analogous mean-field theory. However, the fractional FKPP description is potentially subject to strong quantum fluctuation effects, so it is not clear a priori if it provides a suitable effective description for generic chaotic systems with power-law interactions. Here we study this problem using a model of coupled quantum dots with interactions decaying as $\frac{1}{r^{\alpha}}$, where each dot hosts $N$ degrees of freedom. The large $N$ limit corresponds to the mean-field description, while quantum fluctuations contributing to the OTOC can be modeled by $\frac{1}{N}$ corrections consisting of a cutoff function and noise. Within this framework, we show that the parameters of the effective theory can be chosen to reproduce the butterfly light cone scalings that we previously found for $N=1$ and generic finite $N$. In order to reproduce these scalings, the fractional index $\mu$ in the FKPP equation needs to be shifted from the naïve value of $\mu = 2\alpha - 1$ to a renormalized value $\mu = 2\alpha - 2$. We provide supporting analytic evidence for the cutoff model and numerical confirmation for the full fractional FKPP equation with cutoff and noise.
Today, most X-ray pixel detectors used at light sources transmit raw pixel data off the detector ASIC. With the availability of more advanced ASIC technology nodes for scientific application, more digital functionality from the computing domains (e.g., compression) can be integrated directly into a detector ASIC to increase data velocity. In this paper, we describe a lightweight, user-configurable detector ASIC digital architecture with on-chip compression which can be implemented in 130 nm technologies in a reasonable area on the ASIC periphery. In addition, we present a design to efficiently handle the variable data from the stream of parallel compressors. The architecture includes user-selectable lossy and lossless compression blocks. The impact of lossy compression algorithms is evaluated on simulated and experimental X-ray ptychography datasets. This architecture is a practical approach to increase pixel detector frame rates towards the continuous 1 MHz regime for not only coherent imaging techniques such as ptychography, but also for other diffraction techniques at X-ray light sources.
Deep saline aquifers are one of the best options for large-scale and long-term hydrogen storage. Predicting the diffusion coefficient of hydrogen molecules at the conditions of saline aquifers is critical for modelling hydrogen storage. The diffusion coefficient of hydrogen molecules in chloride brine with different cations ($\mathrm{Na}^+$, $\mathrm{K}^+$, $\mathrm{Ca}^{2+}$) at concentrations up to 5 $\mathrm{mol/kg_{H_2O}}$ is numerically investigated using molecular dynamics (MD) simulation. A wide range of pressure (1-218 atm) and temperature (298-648 K) conditions is applied to cover the realistic operational conditions of the aquifers. We find that temperature, pressure and the properties of the ions (compositions and concentrations) affect the hydrogen diffusion coefficient. Arrhenius behavior of the temperature dependence of the diffusion coefficient is observed, with temperature-independent parameters fitted using the ion concentration under constant pressure. However, pressure strongly affects the diffusive behavior of hydrogen in the high-temperature ($\geq$ 400 K) regime, indicating the inaccuracy of the Arrhenius model there. Hence, we combine the obtained MD results with four machine learning (ML) models, including linear regression (LR), random forest (RF), extra trees (ET) and gradient boosting (GB), to provide effective predictions of hydrogen diffusion. The combination of the GB model with MD data predicts the diffusion of hydrogen more effectively than the Arrhenius model and the other ML models. Moreover, a post hoc analysis (feature importance ranking) has been performed to extract the correlation between physical descriptors and simulation results from the ML models.
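The Arrhenius behavior mentioned above, $D = D_0 \exp(-E_a / RT)$, can be fitted as a straight line in $\ln D$ versus $1/T$. A minimal Python sketch on synthetic diffusion data (the numerical values are illustrative assumptions, not the paper's MD results):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Synthetic "MD" diffusion coefficients following an Arrhenius law
# D = D0 * exp(-Ea / (R*T)); D0 and Ea here are illustrative only.
D0_true, Ea_true = 5.0e-8, 1.6e4            # m^2/s, J/mol
T = np.array([298.0, 348.0, 398.0, 448.0, 498.0])
D = D0_true * np.exp(-Ea_true / (R * T))

# Linearize: ln D = ln D0 - (Ea/R) * (1/T), then fit a straight line.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea_fit = -slope * R        # recovered activation energy
D0_fit = np.exp(intercept) # recovered pre-exponential factor
```

At high temperature, where the abstract notes the Arrhenius model breaks down, the same (T, P, concentration) features would instead be fed to a regressor such as scikit-learn's `GradientBoostingRegressor`.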
Equal pay is an essential component of gender equality, one of the Sustainable Development Goals of the United Nations. Using resume data of over ten million Chinese online job seekers in 2015, we study the current gender pay gap in China. The results show that on average women earned only 71.57\% of what men earned in China. The gender pay gap exists across all age groups and educational levels. Contrary to the commonly held view that developments in education, the economy, and a more open culture would reduce the gender pay gap, the fusion analysis of resume data and socio-economic data shows that these factors have not helped reach gender pay equality in China. China seems to be stuck in a place where traditional methods cannot make further progress. Our analysis further shows that 81.47\% of the variance in the gender pay gap can be potentially attributed to discrimination. In particular, compared with unmarried job seekers, both the gender pay gap itself and the proportion potentially attributed to discrimination are larger for married ones, indicating that married women suffer greater inequality and more discrimination than unmarried women. Taken together, we suggest that more research attention be paid to the effect of discrimination in understanding the gender pay gap, based on family constraint theory. We also suggest that the Chinese government increase investment in family-supportive policies and grants in addition to female education.
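One standard way to split a pay gap into an "explained" part (observable characteristics) and an "unexplained" part (often read as an upper bound on discrimination) is a Blinder-Oaxaca-style decomposition; the abstract does not specify the authors' exact method, so the sketch below is only a generic illustration on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic log-wage data for two groups with one observable (education).
# All coefficients and distributions are illustrative assumptions.
n = 4000
edu_m = rng.normal(13.0, 2.0, n)
edu_f = rng.normal(12.5, 2.0, n)
wage_m = 1.0 + 0.08 * edu_m + rng.normal(0, 0.2, n)
wage_f = 0.8 + 0.08 * edu_f + rng.normal(0, 0.2, n)

# Fit group-specific linear wage equations (slope, intercept).
bm = np.polyfit(edu_m, wage_m, 1)
bf = np.polyfit(edu_f, wage_f, 1)

gap = wage_m.mean() - wage_f.mean()
# Explained part: difference in endowments priced at men's coefficients.
explained = bm[0] * (edu_m.mean() - edu_f.mean())
unexplained = gap - explained
share_unexplained = unexplained / gap  # analogous to the 81.47% figure
```

With more covariates (age, education level, industry, marital status), the same decomposition extends by replacing the single slope with a coefficient vector.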
JUNO Collaboration, Angel Abusleme, Thomas Adam, Shakeel Ahmad, Rizwan Ahmed, Sebastiano Aiello, Muhammad Akram, Fengpeng An, Qi An, Giuseppe Andronico, Nikolay Anfimov, Vito Antonelli, Tatiana Antoshkina, Burin Asavapibhop, João Pedro Athayde Marcondes de André, Didier Auguste, Nikita Balashov, Wander Baldini, Andrea Barresi, Davide Basilico, et al (582) We present the detection potential for the diffuse supernova neutrino background (DSNB) at the Jiangmen Underground Neutrino Observatory (JUNO), using the inverse-beta-decay (IBD) detection channel on free protons. We employ the latest information on the DSNB flux predictions, and investigate in detail the background and its reduction for the DSNB search at JUNO. The atmospheric-neutrino-induced neutral current (NC) background turns out to be the most critical background, whose uncertainty is carefully evaluated from both the spread of model predictions and an envisaged in situ measurement. We also make a careful study of the background suppression with the pulse shape discrimination (PSD) and triple coincidence (TC) cuts. With the latest DSNB signal predictions, a more realistic background evaluation, PSD efficiency optimization, and an additional TC cut, JUNO can reach a significance of 3$\sigma$ for 3 years of data taking, and achieve better than 5$\sigma$ after 10 years for a reference DSNB model. In the pessimistic scenario of non-observation, JUNO would strongly improve the limits and exclude a significant region of the model parameter space.
Link prediction is a paradigmatic and challenging problem in network science, which attempts to uncover missing links or predict future links based on known topology. A fundamental but still unsolved issue is how to choose proper metrics to fairly evaluate prediction algorithms. The area under the receiver operating characteristic curve (AUC) and the balanced precision (BP) are the two most popular metrics in early studies, while their effectiveness has recently come under debate. At the same time, the area under the precision-recall curve (AUPR) is becoming increasingly popular, especially in biological studies. Based on a toy model with tunable noise and predictability, we propose a method to measure the discriminating ability of any given metric. We apply this method to the above three threshold-free metrics, showing that AUC and AUPR are remarkably more discriminating than BP, and that AUC is slightly more discriminating than AUPR. The result suggests that it is better to use AUC and AUPR simultaneously when evaluating link prediction algorithms; at the same time, it warns us that an evaluation based only on BP may be unreliable. This article provides a starting point towards a comprehensive picture of the effectiveness of evaluation metrics for link prediction and other classification problems.
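The three metrics compared above can each be computed from a ranked list of prediction scores. A minimal scikit-learn sketch on synthetic scores (an illustrative stand-in, not the paper's toy model with tunable noise):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)

# Synthetic link-prediction scores: positives (missing links) tend to
# score higher than negatives (non-existent links). Class imbalance 1:9.
y_true = np.r_[np.ones(100), np.zeros(900)]
scores = np.r_[rng.normal(1.0, 1.0, 100), rng.normal(0.0, 1.0, 900)]

auc = roc_auc_score(y_true, scores)             # AUC
aupr = average_precision_score(y_true, scores)  # AUPR (AP approximation)

# Balanced precision (BP): precision among the top-L ranked pairs,
# with L set to the number of positives.
L = int(y_true.sum())
top = np.argsort(-scores)[:L]
bp = y_true[top].mean()
```

Under class imbalance like this, AUPR and BP penalize false positives near the top of the ranking far more heavily than AUC does, which is why the three metrics can disagree when ranking algorithms.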
The nitrogen-vacancy (NV) center in diamond is a promising quantum sensor with remarkably versatile sensing capabilities. While scanning NV magnetometry is well-established, NV electrometry has so far been limited to bulk diamonds. Here we demonstrate imaging of external alternating (AC) and direct (DC) electric fields with a single NV at the apex of a diamond scanning tip under ambient conditions. A strong electric field screening effect is observed at low frequencies due to charge noise on the surface. We quantitatively measure its frequency dependence, and overcome this screening by mechanically oscillating the tip for imaging DC fields. Our scanning NV electrometry achieved an AC E-field sensitivity of 26 mV um^(-1) Hz^(-1/2), a DC E-field gradient sensitivity of 2 V um^(-2) Hz^(-1/2), and sub-100 nm resolution limited by the NV-sample distance. Our work represents an important step toward building a scanning-probe-based multimodal quantum sensing platform.
Leily Kiani, Tong Zhou, Seung-Whan Bahk, Jake Bromage, David Bruhwiler, E. Michael Campbell, Zenghu Chang, Enam Chowdhury, Michael Downer, Qiang Du, Eric Esarey, Almantas Galvanauskas, Thomas Galvin, Constantin Hafner, Dieter Hoffmann, Chan Joshi, Manoj Kanskar, Wei Lu, Carmen Menoni, Michael Messerly, et al (17) Large-scale laser facilities are needed to advance the energy frontier in high energy physics and accelerator physics. Laser-plasma accelerators are core to advanced accelerator concepts aimed at reaching TeV electron-electron colliders. In these facilities, intense laser pulses drive plasmas and are used to accelerate electrons to high energies in remarkably short distances. A laser-plasma accelerator could in principle reach high energies with an accelerating length that is 1000 times shorter than in conventional RF-based accelerators. Notionally, laser-driven particle beam energies could scale beyond state-of-the-art conventional accelerators. LPAs have produced multi-GeV electron beams in about 20 cm with a relative energy spread of about 2 percent, supported by highly developed laser technology. This validates key elements of the US DOE strategy for such accelerators to enable future colliders, but extending the best results to date to a TeV collider will require lasers with higher average power. While the per-pulse energies envisioned for laser-driven colliders are achievable with current lasers, low laser repetition rates limit potential collider luminosity. Applications will require rates of kHz to tens of kHz at joules of energy and high efficiency, and a collider would require about 100 such stages, a leap from current Hz-class LPAs. This represents a challenging 1000-fold increase in laser repetition rates beyond the current state of the art. This whitepaper describes current research and the outlook for candidate laser systems, as well as the accompanying broadband and high-damage-threshold optics needed for driving future advanced accelerators.
C. B. Adams, N. Aggarwal, A. Agrawal, R. Balafendiev, C. Bartram, M. Baryakhtar, H. Bekker, P. Belov, K. K. Berggren, A. Berlin, C. Boutan, D. Bowring, D. Budker, A. Caldwell, P. Carenza, G. Carosi, R. Cervantes, S. S. Chakrabarty, S. Chaudhuri, T. Y. Chen, et al (135) Axions are well-motivated dark matter candidates with simple cosmological production mechanisms. They were originally introduced to solve the strong CP problem, but also arise in a wide range of extensions to the Standard Model. This Snowmass white paper summarizes axion phenomenology and outlines next-generation laboratory experiments proposed to detect axion dark matter. There are vibrant synergies with astrophysical searches and advances in instrumentation including quantum-enabled readout, high-Q resonators and cavities and large high-field magnets. This white paper outlines a clear roadmap to discovery, and shows that the US is well-positioned to be at the forefront of the search for axion dark matter in the coming decade.
C. Benedetti, S. S. Bulanov, E. Esarey, C. G. R. Geddes, A. J. Gonsalves, A. Huebl, R. Lehe, K. Nakamura, C. B. Schroeder, D. Terzani, J. van Tilborg, M. Turner, J.-L. Vay, T. Zhou, F. Albert, J. Bromage, E. M. Campbell, D. H. Froula, J. P. Palastro, J. Zuegel, et al (23) White paper to the Proceedings of the U.S. Particle Physics Community Planning Exercise (Snowmass 2021): Linear colliders based on laser-plasma accelerators
Yongming Luo, Yanshan Zhuang, Zhongshu Feng, Haodong Fan, Birui Wu, Menghao Jing, Ziji Shao, Hai Li, Ru Bai, Yizheng Wu, Ningning Wang, Tiejun Zhou L10-FePt distinguishes itself by its ultrahigh perpendicular magnetic anisotropy (PMA), which enables memory cells with sufficient thermal stability to scale down to 3 nm. The recently discovered "bulk" spin-orbit torques in L10-FePt provide an efficient and scalable way to manipulate the L10-FePt magnetization. However, the need for an external field during switching limits its practical application, and field-free switching of L10-FePt is therefore in high demand. In this manuscript, we demonstrate field-free switching of L10-FePt by growing it on vicinal MgO (001) substrates. This method differs from previously established strategies, as it does not require adding other functional layers or creating asymmetry in the film structure. We demonstrate that the field-free switching is robust and can withstand strong field disturbances up to ~1 kOe. The dependence on vicinal angle, film thickness, and growth temperature demonstrates a wide operation window for the field-free switching of L10-FePt. We confirm that the physical origin of the field-free switching is the vicinal-surface-induced tilted anisotropy of L10-FePt. We quantitatively characterize the spin-orbit torques in the L10-FePt films and find that they are not significantly influenced by the lattice strain from the vicinal substrates. Our results extend beyond the established strategies to realize field-free switching and could potentially be applied to other magnetic and antiferromagnetic systems.
Reza Ebadi, Mason C. Marshall, David F. Phillips, Johannes Cremer, Tao Zhou, Michael Titze, Pauli Kehayias, Maziar Saleh Ziabari, Nazar Delegan, Surjeet Rajendran, Alexander O. Sushkov, F. Joseph Heremans, Edward S. Bielejec, Martin V. Holt, Ronald L. Walsworth Next-generation dark matter (DM) detectors searching for weakly interacting massive particles (WIMPs) will be sensitive to coherent scattering from solar neutrinos, demanding an efficient background-signal discrimination tool. Directional detectors improve sensitivity to WIMP DM despite the irreducible neutrino background. Wide-bandgap semiconductors offer a path to directional detection in a high-density target material. A detector of this type operates in a hybrid mode. The WIMP or neutrino-induced nuclear recoil is detected using real-time charge, phonon, or photon collection. The directional signal, however, is imprinted as a durable sub-micron damage track in the lattice structure. This directional signal can be read out by a variety of atomic physics techniques, from point defect quantum sensing to x-ray microscopy. In this white paper, we present the detector principle and review the status of the experimental techniques required for directional readout of nuclear recoil tracks. Specifically, we focus on diamond as a target material; it is both a leading platform for emerging quantum technologies and a promising component of next-generation semiconductor electronics. Based on the development and demonstration of directional readout in diamond over the next decade, a future WIMP detector will leverage or motivate advances in multiple disciplines towards precision dark matter and neutrino physics.
Following the explosive growth of global data, there is an ever-increasing demand for high-throughput optical fiber communication (OFC) systems to perform massive data transmission and processing. Existing OFC methods mainly rely on electronic circuits for data processing, which severely limits the communication throughput. Though considered promising for next-generation high-speed fiber communication, all-optical OFC remains unachievable due to serious challenges in effective optical computing, system modeling, and configuration. Here we propose an end-to-end photonic encoder-decoder (PED) processor which maps the physical system of OFC into an optical generative neural network. By modeling the OFC transmission process as the variation in the constructed optical latent space, the PED learns noise-resistant coding schemes via unsupervised optimization. With multi-layer parametric diffractive neural networks, the PED establishes a large-scale and high-throughput optical computing framework that integrates the main OFC computations, including coding, encryption and compression, into the optical domain. The whole system reduces the computation latency of OFC systems by five orders of magnitude compared with the state-of-the-art device. On benchmarking datasets, the PED experimentally achieves up to a 32% reduction in transmission error ratio (ER) compared with on-off keying (OOK), one of the mainstream methods with the lowest ER in general transmission. As we demonstrate on medical data, the PED increases the transmission throughput by two orders of magnitude compared with 8-level pulse amplitude modulation (PAM-8). We believe the proposed photonic encoder-decoder processor not only paves the way to next-generation all-optical OFC systems, but also promotes a wide range of AI-based physical system designs.
Large perpendicular magnetic anisotropy (MA) is highly desirable for realizing atomic-scale magnetic data storage, which represents the ultimate limit of magnetic recording density. In this work, we studied the MA of transition metal dimers Co-Os, Co-Co and Os-Os adsorbed on two-dimensional ferroelectric In2Se3 (In2Se3-CoOs, In2Se3-OsCo, In2Se3-CoCo and In2Se3-OsOs) by first-principles calculations. We find that the Co-Os dimer in In2Se3-CoOs has a large total perpendicular magnetic anisotropy energy (MAE) of ~40 meV. In particular, the MAE arising from the Os atom is up to ~60 meV. The large MAE is attributed to the high spin-orbit coupling constant and the onefold coordination of the Os atom. In addition, the MA of the dimers can be tuned by the polarization reversal of In2Se3. When the polarization is upward, the easy-axis directions of the MA in In2Se3-OsCo, In2Se3-CoCo and In2Se3-OsOs are all in-plane, while the directions become perpendicular when the polarization is switched to downward. For In2Se3-CoOs, switching the polarization from upward to downward enhances the perpendicular MA from ~20 meV to ~40 meV. Based on second-order perturbation theory, we confirm that the exchange splitting of the dxy/dx2-y2 and dxz/dyz orbitals, as well as the occupation of the dz2 orbital in the vicinity of the Fermi level, play important roles in the changes of the MA with the reversal of the ferroelectric polarization of In2Se3.
Community detection is a significant and challenging task in network science. Recently, much attention has been paid to local methods for community detection. Greedy expansion is a popular and efficient class of local algorithms, which typically starts from selected central nodes and expands them into provisional communities by optimizing a certain quality function. In this paper, we propose a novel index, called the local superiority index (LSI), to identify central nodes. In the process of expansion, we apply the fitness function to estimate the quality of provisional communities and ensure that all provisional communities are weak communities. Evaluation based on the normalized mutual information suggests that: (1) LSI is superior to the global maximal degree index and the local maximal degree index on most considered networks; (2) the greedy algorithm based on LSI outperforms the classical fast algorithm on most considered networks.
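The greedy-expansion loop can be sketched as follows. This is a minimal illustration, not the paper's LSI algorithm: the seed node is supplied directly (standing in for a node selected by the proposed index), and the quality function is the standard local fitness f(C) = k_in / (k_in + k_out)^alpha, which is a common choice in this class of methods.

```python
# Minimal sketch of greedy community expansion from a seed node.
# Assumptions (not from the abstract): the fitness is the standard
# f(C) = k_in / (k_in + k_out)^alpha, and the seed is given explicitly.

def fitness(community, adj, alpha=1.0):
    k_in = k_out = 0
    for u in community:
        for v in adj[u]:
            if v in community:
                k_in += 1   # each internal edge counted twice (both endpoints)
            else:
                k_out += 1
    total = k_in + k_out
    return k_in / total ** alpha if total else 0.0

def greedy_expand(seed, adj, alpha=1.0):
    """Grow a community from `seed` while adding a neighbor improves fitness."""
    community = {seed}
    improved = True
    while improved:
        improved = False
        frontier = {v for u in community for v in adj[u]} - community
        best, best_f = None, fitness(community, adj, alpha)
        for v in frontier:
            f = fitness(community | {v}, adj, alpha)
            if f > best_f:
                best, best_f = v, f
        if best is not None:
            community.add(best)
            improved = True
    return community

# Toy graph: two triangles joined by a single bridge edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(sorted(greedy_expand(0, adj)))  # → [0, 1, 2]
```

Expansion stops once adding any frontier node would lower the fitness, so on the toy graph the bridge node 3 is never absorbed into the left triangle.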
In mesoscopic electronic systems, the Fabry-Pérot (FP) oscillation is observed in various 1D devices. In higher dimensions, numerous transverse channels usually lead to dephasing that quenches the overall oscillation of the conductance. Up to now, the FP oscillation in 2D electronic systems has only been reported in graphene-based devices and, very recently, in the pn junctions of the inverted InAs/GaSb double quantum well [Phys. Rev. X 10, 031007 (2020)]. In the latter, the sombrero-hat shape of the band plays an essential role, introducing a novel mechanism of electron-hole hybridization for the 2D FP oscillation. In this work, we propose that such a scenario can be generalized to a 2D planar junction composed of a low-density Rashba gas, where the band bottom possesses a sombrero-hat shape as well. We show that the backscattering between the outer and inner Fermi circles dominates the FP interference and significantly suppresses the dephasing effect between different transverse channels, leading to a visible oscillation of the tunneling conductance. Notably, the visibility of the oscillating pattern can be enhanced by applying interface barriers, in contrast to the case of the InAs/GaSb double quantum well. Our results provide a promising route to implementing the FP oscillation in the 2D electron gas.
Occupational segregation is widely considered a major cause of gender discrimination in the labor market. Using large-scale Chinese resume data of online job seekers, we uncover an interesting phenomenon: occupations with a higher proportion of men have a smaller gender wage gap, measured by the female-to-male wage ratio. We further show that the severity of occupational segregation in China is low both overall and regionally, and that inter-occupational discrimination is much smaller than intra-occupational discrimination. That is to say, Chinese women do not face large barriers when changing occupations. Accordingly, we suggest a new way for Chinese women to narrow the gender wage gap: joining male-dominated occupations. Meanwhile, it is worth noting that although the gender wage gap is smaller in male-dominated occupations, this does not mean that gender discrimination is smaller there.
Modern biomedical applications such as targeted drug delivery require a delivery system capable of enhanced transport beyond that of passive Brownian diffusion. In this work an osmotic mechanism for the propulsion of a vesicle immersed in a viscous fluid is proposed. By maintaining a steady-state solute gradient inside the vesicle, a seepage flow of the solvent (e.g., water) across the semipermeable membrane is generated which in turn propels the vesicle. We develop a theoretical model for this vesicle-solute system in which the seepage flow is described by a Darcy flow. Using the reciprocal theorem for Stokes flow it is shown that the seepage velocity at the exterior surface of the vesicle generates a thrust force which is balanced by the hydrodynamic drag such that there is no net force on the vesicle. We characterize the motility of the vesicle in relation to the concentration distribution of the solute confined inside the vesicle. Any osmotic solute is able to propel the vesicle so long as a concentration gradient is present. In the present work, we propose active Brownian particles (ABPs) as a solute. To maintain a symmetry-breaking concentration gradient, we consider ABPs with spatially varying swim speed and ABPs with constant properties but under the influence of an orienting field. In particular, it is shown that at high activity the vesicle velocity is $\boldsymbol{U}\sim [K_\perp /(\eta_e\ell_m) ]\int \Pi_0^\mathrm{swim} \boldsymbol{n} d\Omega $, where $\Pi_0^\mathrm{swim}$ is the swim pressure just outside the thin accumulation boundary layer on the interior vesicle surface, $\boldsymbol{n}$ is the unit normal vector of the vesicle boundary, $K_\perp$ is the membrane permeability, $\eta_e$ is the viscosity of the solvent, and $\ell_m$ is the membrane thickness.
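The high-activity scaling above implies that only the anisotropic part of the interior swim pressure propels the vesicle: an isotropic $\Pi_0^\mathrm{swim}$ integrates to zero against $\boldsymbol{n}$. A small numeric check of this point, under the purely illustrative assumption (not from the text) of a dipolar distribution $\Pi(\theta) = \Pi_0(1 + \varepsilon\cos\theta)$ on the interior surface, for which the surface integral reduces analytically to $(4\pi/3)\,\Pi_0\,\varepsilon$ along $z$ (the mobility prefactor $K_\perp/(\eta_e\ell_m)$ is omitted):

```python
import math

# Numeric check of the z-component of ∮ Π(θ) n dΩ for the illustrative
# dipolar swim-pressure distribution Π(θ) = Π0 (1 + ε cos θ).
# The isotropic part drops out; only the ε cos θ term contributes.

def propulsion_integral_z(pi0, eps, n=20000):
    # ∮ Π(θ) n_z dΩ = ∫_0^{2π} dφ ∫_0^{π} Π(θ) cosθ sinθ dθ  (midpoint rule)
    total = 0.0
    dtheta = math.pi / n
    for i in range(n):
        th = (i + 0.5) * dtheta
        total += pi0 * (1 + eps * math.cos(th)) * math.cos(th) * math.sin(th) * dtheta
    return 2 * math.pi * total

Iz = propulsion_integral_z(pi0=1.0, eps=0.3)
print(Iz, 4 * math.pi * 0.3 / 3)  # numeric vs analytic: both ≈ 1.2566
```

The agreement confirms that a symmetry-breaking gradient in the swim pressure is what sets the direction and magnitude of the vesicle velocity.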
The great expansion of high-speed rail (HSR) in China facilitates communications and interactions among people across cities. Despite extensive literature documenting the effects of HSR on a variety of variables such as local economic development, research collaboration, tourism, and capital mobility, not much is known about how HSR affects the flow of well-educated workers, i.e., talents. Here we estimate talent flow among Chinese cities based on large-scale resume data of online job seekers and explore how it is affected by HSR. Specifically, we employ both a multiple linear regression model that controls for several socioeconomic factors and a two-stage least squares regression model that instruments the introduction of HSR to a city to address endogeneity concerns. We find that the introduction of HSR has an overall positive effect on the talent net inflow of a city, although both inflow and outflow are increased. Moreover, the effects of HSR on talent flow are rather heterogeneous for cities with different levels of economic development and for talents working in different industries. Specifically, developed cities benefit from HSR, whereas less-developed cities are relatively impaired. Cities connected by HSR show a significant advantage in attracting talents from the secondary and tertiary industries. These substantial but heterogeneous effects of HSR suggest a critical need for more comprehensive thinking about the long-term benefits of entering the HSR network, especially for less-developed cities and those with comparative advantages in manufacturing and service industries.
Jesse Liu, Kristin Dona, Gabe Hoshino, Stefan Knirck, Noah Kurinsky, Matthew Malaker, David W. Miller, Andrew Sonnenschein, Mohamed H. Awida, Peter S. Barry, Karl K. Berggren, Daniel Bowring, Gianpaolo Carosi, Clarence Chang, Aaron Chou, Rakshya Khatiwada, Samantha Lewis, Juliang Li, Sae Woo Nam, Omid Noroozian, et al (1) We introduce the Broadband Reflector Experiment for Axion Detection (BREAD) conceptual design and science program. This haloscope plans to search for bosonic dark matter across the [10$^{-3}$, 1] eV ([0.24, 240] THz) mass range. BREAD proposes a cylindrical metal barrel to convert dark matter into photons, which a novel parabolic reflector design focuses onto a photosensor. This unique geometry enables enclosure in standard cryostats and high-field solenoids, overcoming limitations of current dish antennas. A pilot 0.7 m$^{2}$ barrel experiment planned at Fermilab is projected to surpass existing dark photon coupling constraints by over a decade with one-day runtime. Axion sensitivity requires $<10^{-20}$ W/$\sqrt{\textrm{Hz}}$ sensor noise equivalent power with a 10 T solenoid and 10 m$^{2}$ barrel. We project BREAD sensitivity for various sensor technologies and discuss future prospects.
Dual-band infrared photodetectors (DBIPs) can discriminate desired signals from complex scenes and are thus highly anticipated for threat-warning, remote sensing, and astronomy applications. Conventional high-performance DBIPs, however, are typically based on semiconductor thin films and face the challenges of complex spatial alignment, expensive growth, and cooling requirements. Here, we report a tunable graphene plasmonic photodetector with dual-band infrared spectral selectivity driven by a ferroelectric superdomain. The periodic ferroelectric polarization array with nanoscale ring shapes provides an ultrahigh electrostatic field for spatially doping monolayer graphene into desired patterns, and is further used to excite and confine intrinsic graphene plasmons. Our devices exhibit tunable resonance photoresponse in two bands, 3.7-16.3 um and 15.1-52.1 um. Numerical calculations show that our devices possess ultrahigh responsivities of 667-1080 A W-1 at room temperature in the range of 5-50 um. Our devices enable applications in infrared imaging systems and in the detection of both stationary and moving objects. These investigations provide a novel approach to constructing advanced infrared systems from simple, low-cost, uncooled multispectral detector arrays.
Using multimodal Magnetic Resonance Imaging (MRI) is necessary for accurate brain tumor segmentation. The main problem is that not all types of MRIs are always available in clinical exams. Based on the fact that there is a strong correlation between MR modalities of the same patient, in this work we propose a novel brain tumor segmentation network for the case where one or more modalities are missing. The proposed network consists of three sub-networks: a feature-enhanced generator, a correlation constraint block, and a segmentation network. The feature-enhanced generator utilizes the available modalities to generate a 3D feature-enhanced image representing the missing modality. The correlation constraint block exploits the multi-source correlation between the modalities and also constrains the generator to synthesize a feature-enhanced modality that has a coherent correlation with the available modalities. The segmentation network is a multi-encoder U-Net that produces the final brain tumor segmentation. The proposed method is evaluated on the BraTS 2018 dataset. Experimental results demonstrate the effectiveness of the proposed method, which achieves average Dice scores of 82.9, 74.9 and 59.1 on whole tumor, tumor core and enhancing tumor, respectively, across all situations, and outperforms the best competing method by 3.5%, 17% and 18.2%.
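The correlation-constraint idea can be illustrated in miniature. This is a toy sketch, not the paper's network: assuming the multi-source correlation between modalities is roughly linear, we fit a least-squares map from available-modality features to the synthesized feature and use the residual as a penalty, so a generator output that is incoherent with the available modalities scores a high loss.

```python
import numpy as np

# Toy version of a correlation-constraint loss (hedged assumption: the
# cross-modality correlation is modeled as linear; the real block learns it).

def correlation_constraint_loss(available, synthesized):
    """available: (M, N) features of M available modalities at N voxels;
    synthesized: (N,) feature of the generated missing modality.
    Returns the mean squared residual of the best linear fit."""
    X = np.vstack([available, np.ones(available.shape[1])])  # add bias row
    w, *_ = np.linalg.lstsq(X.T, synthesized, rcond=None)    # least squares
    residual = synthesized - X.T @ w
    return float(np.mean(residual ** 2))  # low => coherent correlation

rng = np.random.default_rng(0)
t1, t2 = rng.standard_normal((2, 500))          # two "available" modalities
coherent = 0.6 * t1 - 0.3 * t2 + 0.1            # linearly related output
incoherent = rng.standard_normal(500)           # unrelated noise
print(correlation_constraint_loss(np.stack([t1, t2]), coherent))    # ~0
print(correlation_constraint_loss(np.stack([t1, t2]), incoherent))  # ~1
```

In training, such a penalty would be added to the segmentation and generation losses to steer the generator toward modality-consistent outputs.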
Almost all media in which particles move are non-static. Depending on the expected resolution of the studied dynamics and the amplitude of the displacement of the medium, the non-static behavior of the medium sometimes cannot be ignored. In this paper, we build a model describing Lévy walks in non-static media, where the physical and comoving coordinates are connected by a scale factor. We derive the equation governing the probability density function of the position of the particles in the comoving coordinate. Using Hermite orthogonal polynomial expansions, we obtain some statistical properties, such as the mean squared displacements (MSDs) in both coordinates and the kurtosis. For some representative non-static media and Lévy walks, the asymptotic behaviors of the MSDs in both coordinates are analyzed in detail. The stationary distributions and mean first passage times for some cases are also discussed through numerical simulations.
Social media and online navigation bring us enjoyable experience in accessing information, and simultaneously create information cocoons (ICs) in which we are unconsciously trapped with limited and biased information. We provide a formal definition of IC in the scenario of online navigation. Subsequently, by analyzing real recommendation networks extracted from Science, PNAS and Amazon websites, and testing mainstream algorithms in disparate recommender systems, we demonstrate that similarity-based recommendation techniques result in ICs, which suppress the system navigability by hundreds of times. We further propose a flexible recommendation strategy that solves the IC-induced problem and improves retrieval accuracy in navigation, demonstrated by simulations on real data and online experiments on the largest video website in China.
JUNO collaboration, Angel Abusleme, Thomas Adam, Shakeel Ahmad, Rizwan Ahmed, Sebastiano Aiello, Muhammad Akram, Fengpeng An, Qi An, Giuseppe Andronico, Nikolay Anfimov, Vito Antonelli, Tatiana Antoshkina, Burin Asavapibhop, João Pedro Athayde Marcondes de André, Didier Auguste, Andrej Babic, Wander Baldini, Andrea Barresi, Davide Basilico, et al (583) JUNO is a massive liquid scintillator detector with a primary scientific goal of determining the neutrino mass ordering by studying the oscillated anti-neutrino flux coming from two nuclear power plants at 53 km distance. The expected signal anti-neutrino interaction rate is only 60 counts per day, therefore a careful control of the background sources due to radioactivity is critical. In particular, natural radioactivity present in all materials and in the environment represents a serious issue that could impair the sensitivity of the experiment if appropriate countermeasures were not foreseen. In this paper we discuss the background reduction strategies undertaken by the JUNO collaboration to minimize the impact of natural radioactivity. We describe our efforts for an optimized experimental design, a careful material screening and accurate detector production handling, and a constant control of the expected results through a meticulous Monte Carlo simulation program. We show that all these actions should allow us to keep the background count rate safely below the target value of 10 Hz in the default fiducial volume, above an energy threshold of 0.7 MeV.
Jingwen Ma, Taojie Zhou, Mingchu Tang, Haochuan Li, Zhan Zhang, Xiang Xi, Mickael Martin, Thierry Baron, Huiyun Liu, Zhaoyu Zhang, Siming Chen, Xiankai Sun Robust laser sources are a fundamental building block for contemporary information technologies. Originating from condensed-matter physics, the concept of topology has recently entered the realm of optics, offering fundamentally new design principles for lasers with enhanced robustness. In analogy to the well-known Majorana fermions in topological superconductors, Dirac-vortex states have recently been investigated in passive photonic systems and are now considered as a promising candidate for single-mode large-area lasers. Here, we experimentally realize the first Dirac-vortex topological lasers in InAs/InGaAs quantum-dot materials monolithically grown on a silicon substrate. We observe room-temperature continuous-wave single-mode linearly polarized vertical laser emission at a telecom wavelength. Most importantly, we confirm that the wavelength of the Dirac-vortex laser is topologically robust against variations in the cavity size, and its free spectral range defies the universal inverse scaling law with the cavity size. These lasers will play an important role in CMOS-compatible photonic and optoelectronic systems on a chip.
A variety of polymeric surfaces, such as anti-corrosion coatings and polymer-modified asphalts, are prone to blistering when exposed to moisture and air. As water and oxygen diffuse through the material, dissolved species are produced, which generate osmotic pressure that deforms and debonds the coating. These mechanisms are experimentally well supported; however, comprehensive macroscopic models capable of predicting the formation of osmotic blisters without extensive data-fitting are scarce. Here, we develop a general mathematical theory of blistering and apply it to the failure of anti-corrosion coatings on carbon steel. The model predicts the irreversible, nonlinear blister growth dynamics, which eventually reach a stable state, rupture, or undergo runaway delamination, depending on the mechanical and adhesion properties of the coating. For runaway delamination, the theory predicts a critical delamination length, beyond which unstable corrosion-driven growth occurs. The model fits multiple sets of blister growth data with no fitting parameters. Corrosion experiments are also performed to observe undercoat rusting on carbon steel, which yields trends comparable with the model predictions. The theory is used to define three dimensionless numbers for the engineering design of elastic coatings capable of resisting visible deformation, rupture, and delamination.
The increasing availability of data and the analytical tools imported from computer science and the physical sciences have sharply changed the traditional methodologies of the social sciences, leading to a new branch named computational socioeconomics, which studies various phenomena in socioeconomic development using quantitative methods based on large-scale real-world data. Drawing on recent publications, this Perspective introduces three representative methods: (i) natural data analyses, (ii) large-scale online experiments, and (iii) integration of big data and surveys. The Perspective ends with an in-depth discussion of the limitations and challenges of these emerging methods.
Modern scientific research has become largely a cooperative activity in the Internet age. We build a simulation model of population-level creativity based on the heuristic ant colony algorithm. Each researcher has two heuristic parameters characterizing the goodness of his own judgments and his trust in the literature. In a population with all kinds of researchers, we find that as the problem scale increases, the contributor distribution shifts significantly from the independent regime of relying on one's own judgments to the cooperative regime of following the literature more closely. The distribution also changes with the stage of the research problem and the computing power available. Our work provides some preliminary understanding of, and guidance for, the dynamical process of cooperative scientific research across disciplines.
Many active matter systems are known to perform Lévy walks during migration or foraging. Such superdiffusive transport indicates long-range correlated dynamics. These behavior patterns have been observed for microswimmers such as bacteria in microfluidic experiments, where Gaussian noise assumptions are insufficient to explain the data. We introduce active Lévy swimmers to model such behavior. The focus is on ideal swimmers that interact only with the walls and not with each other, which reduces to the classical Lévy walk model but now under confinement. We study the density distribution in the channel and the force exerted on the walls by the Lévy swimmers, where the boundaries require proper explicit treatment. We analyze stronger confinement via a set of coupled kinetic equations and the swimmers' stochastic trajectories. Previous literature demonstrated that power-law scaling in a multiscale analysis in free space results in a fractional diffusion equation. We show that in a channel, in the weak-confinement limit, active Lévy swimmers are governed by a modified Riesz fractional derivative. Leveraging recent results on fractional fluxes, we derive steady-state solutions for the bulk density distribution of active Lévy swimmers in a channel and demonstrate that these solutions agree well with particle simulations. The profiles are non-uniform over the entire domain, in contrast to the constant-in-the-bulk profiles of active Brownian and run-and-tumble particles. Our theory provides a mathematical framework for Lévy walks under confinement with sliding no-flux boundary conditions and a foundation for studies of interacting active Lévy swimmers.
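A toy particle simulation illustrates the confinement effect described above. This is a hedged sketch with illustrative parameters, not the paper's model: a single 1D Lévy walker draws Pareto-distributed (heavy-tailed) run lengths, picks a random direction, and has each run truncated at the channel walls, mimicking a no-flux boundary.

```python
import random

# Toy 1D Lévy walker in a channel [0, L] with runs truncated at the walls.
# Assumptions (illustrative, not from the abstract): Pareto run lengths with
# tail exponent alpha, instantaneous direction reversal, unit speed.

def levy_walk_channel(steps, L=10.0, alpha=1.5, seed=1):
    rng = random.Random(seed)
    x = L / 2
    positions = []
    for _ in range(steps):
        run = rng.paretovariate(alpha)            # heavy-tailed run length
        direction = rng.choice((-1, 1))
        x = min(max(x + direction * run, 0.0), L)  # truncate run at the wall
        positions.append(x)
    return positions

pos = levy_walk_channel(100000)
# Long runs pin the walker at the boundaries, so occupancy near the walls
# exceeds occupancy in the channel center: a non-uniform density profile.
near_wall = sum(1 for x in pos if x < 1 or x > 9) / len(pos)
center = sum(1 for x in pos if 4 < x < 6) / len(pos)
print(near_wall, center)
```

Even this crude sampler reproduces the qualitative point of the abstract: the density is not constant in the bulk, unlike for active Brownian or run-and-tumble particles.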
JUNO Collaboration, Angel Abusleme, Thomas Adam, Shakeel Ahmad, Rizwan Ahmed, Sebastiano Aiello, Muhammad Akram, Fengpeng An, Guangpeng An, Qi An, Giuseppe Andronico, Nikolay Anfimov, Vito Antonelli, Tatiana Antoshkina, Burin Asavapibhop, João Pedro Athayde Marcondes de André, Didier Auguste, Andrej Babic, Wander Baldini, Andrea Barresi, et al (587) The OSIRIS detector is a subsystem of the liquid scintillator filling chain of the JUNO reactor neutrino experiment. Its purpose is to validate the radiopurity of the scintillator to assure that all components of the JUNO scintillator system work to specifications and only neutrino-grade scintillator is filled into the JUNO Central Detector. The aspired sensitivity level of $10^{-16}$ g/g of $^{238}$U and $^{232}$Th requires a large ($\sim$20 m$^3$) detection volume and ultralow background levels. The present paper reports on the design and major components of the OSIRIS detector, the detector simulation as well as the measuring strategies foreseen and the sensitivity levels to U/Th that can be reached in this setup.
Women are set back in the labor market after becoming mothers. Intuitively, childcare services should promote women's employment, as they may mitigate the motherhood penalty. However, most known studies have concentrated on the effects of childcare services on fertility rates rather than on quantitative analyses of the effects on women's employment. Using worldwide panel data and Chinese data at the province level, this paper reveals a quantitative relationship between childcare services and women's employment: the attendance rate of childcare services is positively correlated with the relative employment rate of women to men. Further analysis suggests that this positive impact may largely result from breaking the vulnerable-employment dilemma.