Struct rustc_const_eval::const_eval::dummy_machine::DummyMachine
pub struct DummyMachine;
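DummyMachine is the do-nothing Machine implementation used by in-tree MIR analyses (for example const-propagation-style lints) that want to evaluate individual constants and operations without running a full const-eval session. The sketch below shows how such a pass might construct an interpreter around it. It only builds inside the compiler source tree, and the import path, the InterpCx::new signature, and the helper name make_ecx are assumptions based on how in-crate consumers typically use the type, not guaranteed API.

// Minimal sketch (assumptions noted above): create an interpretation
// context backed by DummyMachine. `tcx` and `param_env` are supplied by
// the surrounding MIR pass.
use rustc_const_eval::const_eval::DummyMachine;
use rustc_const_eval::interpret::InterpCx;
use rustc_middle::ty::{ParamEnv, TyCtxt};
use rustc_span::DUMMY_SP;

fn make_ecx<'tcx>(
    tcx: TyCtxt<'tcx>,
    param_env: ParamEnv<'tcx>,
) -> InterpCx<'tcx, DummyMachine> {
    // No stack frames are ever pushed; the context is only used to evaluate
    // individual constants and operations on immediates.
    InterpCx::new(tcx, DUMMY_SP, param_env, DummyMachine)
}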
Trait Implementations
impl HasStaticRootDefId for DummyMachine

fn static_def_id(&self) -> Option<LocalDefId>
Returns the DefId of the static item that is currently being evaluated. Used for interning to be able to handle nested allocations.

impl<'tcx> Machine<'tcx> for DummyMachine
type Provenance = CtfeProvenance
Pointers are “tagged” with provenance information; typically the AllocId they belong to.

type ProvenanceExtra = bool
When getting the AllocId of a pointer, some extra data is also obtained from the provenance that is passed to memory access hooks so they can do things with it.
type ExtraFnVal = !
Machines can define extra (non-instance) things that represent values of function pointers. For example, Miri uses this to return a function pointer from dlsym that can later be called to execute the right thing.

type MemoryMap = IndexMap<AllocId, (MemoryKind<!>, Allocation), BuildHasherDefault<FxHasher>>
Memory’s allocation map
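Several associated types here are set to the never type ! (ExtraFnVal above, and MemoryKind further down), so no value of them can ever exist and the corresponding hooks, such as call_extra_fn, are statically unreachable for DummyMachine. A standalone stable-Rust sketch of the same pattern, using a hypothetical empty enum Never as a stand-in for !:

// Empty enum as a stable stand-in for the never type `!`:
// no value of `Never` can be constructed.
#[allow(dead_code)]
enum Never {}

// Mirrors the shape of a hook whose argument type is uninhabited, like
// `call_extra_fn` with `ExtraFnVal = !`. Because no argument value can
// exist, an empty match covers every (nonexistent) case and the body can
// never actually run.
#[allow(dead_code)]
fn call_extra_fn(fn_val: Never) -> Result<(), String> {
    match fn_val {}
}

fn main() {
    // There is no way to construct a `Never`, hence no way to reach the
    // body of `call_extra_fn`.
    println!("call_extra_fn is statically unreachable");
}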
const GLOBAL_KIND: Option<Self::MemoryKind> = None
The memory kind to use for copied global memory (held in tcx) – or None if such memory should not be mutated and thus any such attempt will cause a ModifiedStatic error to be raised. Statics are copied under two circumstances: when they are mutated, and when adjust_allocation (see below) returns an owned allocation that is added to the memory so that the work is not done twice.

type AllocExtra = ()
Extra data stored in every allocation.

type FrameExtra = ()
Extra data stored in every call frame.
fn ignore_optional_overflow_checks(_ecx: &InterpCx<'tcx, Self>) -> bool
Whether Assert(OverflowNeg) and Assert(Overflow) MIR terminators should actually check for overflow.
fn unwind_terminate(
    _ecx: &mut InterpCx<'tcx, Self>,
    _reason: UnwindTerminateReason,
) -> InterpResult<'tcx>
Called when unwinding reached a state where execution should be terminated.
fn call_extra_fn(
    _ecx: &mut InterpCx<'tcx, Self>,
    fn_val: !,
    _abi: CallAbi,
    _args: &[FnArg<'tcx>],
    _destination: &MPlaceTy<'tcx, Self::Provenance>,
    _target: Option<BasicBlock>,
    _unwind: UnwindAction,
) -> InterpResult<'tcx>
Execute fn_val. It is the hook’s responsibility to advance the instruction pointer as appropriate.

fn adjust_global_allocation<'b>(
    _ecx: &InterpCx<'tcx, Self>,
    _id: AllocId,
    alloc: &'b Allocation,
) -> InterpResult<'tcx, Cow<'b, Allocation<Self::Provenance>>>
Called to adjust global allocations to the Provenance and AllocExtra of this machine.
fn init_alloc_extra(
    _ecx: &InterpCx<'tcx, Self>,
    _id: AllocId,
    _kind: MemoryKind<Self::MemoryKind>,
    _size: Size,
    _align: Align,
) -> InterpResult<'tcx, Self::AllocExtra>
Initialize the extra state of an allocation.
fn extern_static_pointer(
    ecx: &InterpCx<'tcx, Self>,
    def_id: DefId,
) -> InterpResult<'tcx, Pointer>
Return the AllocId for the given extern static.

fn adjust_alloc_root_pointer(
    _ecx: &InterpCx<'tcx, Self>,
    ptr: Pointer<CtfeProvenance>,
    _kind: Option<MemoryKind<Self::MemoryKind>>,
) -> InterpResult<'tcx, Pointer<CtfeProvenance>>
Return a “root” pointer for the given allocation: the one that is used for direct accesses to this static/const/fn allocation, or the one returned from the heap allocator.
fn ptr_from_addr_cast(
    _ecx: &InterpCx<'tcx, Self>,
    addr: u64,
) -> InterpResult<'tcx, Pointer<Option<CtfeProvenance>>>
“Int-to-pointer cast”

fn ptr_get_alloc(
    _ecx: &InterpCx<'tcx, Self>,
    ptr: Pointer<CtfeProvenance>,
) -> Option<(AllocId, Size, Self::ProvenanceExtra)>
Convert a pointer with provenance into an allocation-offset pair and extra provenance info.
type MemoryKind = !
Additional memory kinds a machine wishes to distinguish from the builtin ones.

const PANIC_ON_ALLOC_FAIL: bool = true
Should the machine panic on allocation failures?

const ALL_CONSTS_ARE_PRECHECKED: bool = false
Determines whether eval_mir_constant can never fail because all required consts have already been checked before.

fn enforce_alignment(_ecx: &InterpCx<'tcx, Self>) -> bool
Whether memory accesses should be alignment-checked.
fn enforce_validity(
    _ecx: &InterpCx<'tcx, Self>,
    _layout: TyAndLayout<'tcx>,
) -> bool
Whether to enforce the validity invariant for a specific layout.
fn before_access_global(
    _tcx: TyCtxtAt<'tcx>,
    _machine: &Self,
    _alloc_id: AllocId,
    alloc: ConstAllocation<'tcx>,
    _static_def_id: Option<DefId>,
    is_write: bool,
) -> InterpResult<'tcx>
Called before a global allocation is accessed. def_id is Some if this is the “lazy” allocation of a static.

fn find_mir_or_eval_fn(
    _ecx: &mut InterpCx<'tcx, Self>,
    _instance: Instance<'tcx>,
    _abi: Abi,
    _args: &[FnArg<'tcx, Self::Provenance>],
    _destination: &MPlaceTy<'tcx, Self::Provenance>,
    _target: Option<BasicBlock>,
    _unwind: UnwindAction,
) -> InterpResult<'tcx, Option<(&'tcx Body<'tcx>, Instance<'tcx>)>>
Entry point to all function calls.
fn panic_nounwind(
    _ecx: &mut InterpCx<'tcx, Self>,
    _msg: &str,
) -> InterpResult<'tcx>
Called to trigger a non-unwinding panic.
fn call_intrinsic(
    _ecx: &mut InterpCx<'tcx, Self>,
    _instance: Instance<'tcx>,
    _args: &[OpTy<'tcx, Self::Provenance>],
    _destination: &MPlaceTy<'tcx, Self::Provenance>,
    _target: Option<BasicBlock>,
    _unwind: UnwindAction,
) -> InterpResult<'tcx, Option<Instance<'tcx>>>
Directly process an intrinsic without pushing a stack frame. It is the hook’s responsibility to advance the instruction pointer as appropriate.
fn assert_panic(
    _ecx: &mut InterpCx<'tcx, Self>,
    _msg: &AssertMessage<'tcx>,
    _unwind: UnwindAction,
) -> InterpResult<'tcx>
Called to evaluate Assert MIR terminators that trigger a panic.

fn binary_ptr_op(
    ecx: &InterpCx<'tcx, Self>,
    bin_op: BinOp,
    left: &ImmTy<'tcx, Self::Provenance>,
    right: &ImmTy<'tcx, Self::Provenance>,
) -> InterpResult<'tcx, ImmTy<'tcx, Self::Provenance>>
Called for all binary operations where the LHS has pointer type.
fn expose_ptr(
    _ecx: &mut InterpCx<'tcx, Self>,
    _ptr: Pointer<Self::Provenance>,
) -> InterpResult<'tcx>
Marks a pointer as exposed, allowing its provenance to be recovered. “Pointer-to-int cast”
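ptr_from_addr_cast and expose_ptr are the interpreter-side hooks behind the two surface-level casts between pointers and integers. A purely illustrative stable-Rust example of those surface operations follows; it does not touch the interpreter at all.

fn main() {
    let x = 42u8;
    let p: *const u8 = &x;

    // "Pointer-to-int cast": conceptually this exposes `p`'s provenance,
    // which is what the `expose_ptr` hook models during interpretation.
    let addr = p as usize;

    // "Int-to-pointer cast": the new pointer must pick up some previously
    // exposed provenance (or none), which is the decision that the
    // `ptr_from_addr_cast` hook has to make.
    let q = addr as *const u8;

    // Reading through `q` is fine here because `p`'s provenance was exposed.
    unsafe { assert_eq!(*q, 42) };
}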
fn init_frame(
    _ecx: &mut InterpCx<'tcx, Self>,
    _frame: Frame<'tcx, Self::Provenance>,
) -> InterpResult<'tcx, Frame<'tcx, Self::Provenance, Self::FrameExtra>>
Called immediately before a new stack frame gets pushed.
fn stack<'a>(
    _ecx: &'a InterpCx<'tcx, Self>,
) -> &'a [Frame<'tcx, Self::Provenance, Self::FrameExtra>]
Borrow the current thread’s stack.

fn stack_mut<'a>(
    _ecx: &'a mut InterpCx<'tcx, Self>,
) -> &'a mut Vec<Frame<'tcx, Self::Provenance, Self::FrameExtra>>
Mutably borrow the current thread’s stack.
fn alignment_check(
    _ecx: &InterpCx<'tcx, Self>,
    _alloc_id: AllocId,
    _alloc_align: Align,
    _alloc_kind: AllocKind,
    _offset: Size,
    _align: Align,
) -> Option<Misalignment>
Gives the machine a chance to detect more misalignment than the built-in checks would catch.

fn enforce_abi(_ecx: &InterpCx<'tcx, Self>) -> bool
Whether function calls should be ABI-checked.
fn load_mir(
    ecx: &InterpCx<'tcx, Self>,
    instance: InstanceKind<'tcx>,
) -> InterpResult<'tcx, &'tcx Body<'tcx>>
Entry point for obtaining the MIR of anything that should get evaluated. So not just functions and shims, but also const/static initializers, anonymous constants, …
fn generate_nan<F1: Float + FloatConvert<F2>, F2: Float>(
    _ecx: &InterpCx<'tcx, Self>,
    _inputs: &[F1],
) -> F2
Generate the NaN returned by a float operation, given the list of inputs. (This is all inputs, not just NaN inputs!)

fn before_terminator(_ecx: &mut InterpCx<'tcx, Self>) -> InterpResult<'tcx>
Called before a basic block terminator is executed.
fn increment_const_eval_counter(
    _ecx: &mut InterpCx<'tcx, Self>,
) -> InterpResult<'tcx>
Called when the interpreter encounters a StatementKind::ConstEvalCounter instruction. You can use this to detect long or endlessly running programs.

fn thread_local_static_pointer(
    _ecx: &mut InterpCx<'tcx, Self>,
    def_id: DefId,
) -> InterpResult<'tcx, Pointer<Self::Provenance>>
Return the AllocId for the given thread-local static in the current thread.

fn eval_inline_asm(
    _ecx: &mut InterpCx<'tcx, Self>,
    _template: &'tcx [InlineAsmTemplatePiece],
    _operands: &[InlineAsmOperand<'tcx>],
    _options: InlineAsmOptions,
    _targets: &[BasicBlock],
) -> InterpResult<'tcx>
Evaluate the inline assembly.
fn before_memory_read(
    _tcx: TyCtxtAt<'tcx>,
    _machine: &Self,
    _alloc_extra: &Self::AllocExtra,
    _prov: (AllocId, Self::ProvenanceExtra),
    _range: AllocRange,
) -> InterpResult<'tcx>
Hook for performing extra checks on a memory read access.

fn before_alloc_read(
    _ecx: &InterpCx<'tcx, Self>,
    _alloc_id: AllocId,
) -> InterpResult<'tcx>
Hook for performing extra checks on any memory read access that involves an allocation, even ZST reads.
fn before_memory_write(
    _tcx: TyCtxtAt<'tcx>,
    _machine: &mut Self,
    _alloc_extra: &mut Self::AllocExtra,
    _prov: (AllocId, Self::ProvenanceExtra),
    _range: AllocRange,
) -> InterpResult<'tcx>
Hook for performing extra checks on a memory write access. This is not invoked for ZST accesses, as no write actually happens.

fn before_memory_deallocation(
    _tcx: TyCtxtAt<'tcx>,
    _machine: &mut Self,
    _alloc_extra: &mut Self::AllocExtra,
    _prov: (AllocId, Self::ProvenanceExtra),
    _size: Size,
    _align: Align,
    _kind: MemoryKind<Self::MemoryKind>,
) -> InterpResult<'tcx>
Hook for performing extra operations on a memory deallocation.
fn retag_ptr_value(
    _ecx: &mut InterpCx<'tcx, Self>,
    _kind: RetagKind,
    val: &ImmTy<'tcx, Self::Provenance>,
) -> InterpResult<'tcx, ImmTy<'tcx, Self::Provenance>>
Executes a retagging operation for a single pointer. Returns the possibly adjusted pointer.

fn retag_place_contents(
    _ecx: &mut InterpCx<'tcx, Self>,
    _kind: RetagKind,
    _place: &PlaceTy<'tcx, Self::Provenance>,
) -> InterpResult<'tcx>
Executes a retagging operation on a compound value. Replaces all pointers stored in the given place.
fn protect_in_place_function_argument(
    ecx: &mut InterpCx<'tcx, Self>,
    mplace: &MPlaceTy<'tcx, Self::Provenance>,
) -> InterpResult<'tcx>
Called on places used for in-place function argument and return value handling.

fn after_stack_push(_ecx: &mut InterpCx<'tcx, Self>) -> InterpResult<'tcx>
Called immediately after a stack frame got pushed and its locals got initialized.
fn before_stack_pop(
    _ecx: &InterpCx<'tcx, Self>,
    _frame: &Frame<'tcx, Self::Provenance, Self::FrameExtra>,
) -> InterpResult<'tcx>
Called just before the return value is copied to the caller-provided return place.

fn after_stack_pop(
    _ecx: &mut InterpCx<'tcx, Self>,
    _frame: Frame<'tcx, Self::Provenance, Self::FrameExtra>,
    unwinding: bool,
) -> InterpResult<'tcx, ReturnAction>
Called immediately after a stack frame got popped, but before jumping back to the caller. The locals have already been destroyed!

fn after_local_allocated(
    _ecx: &mut InterpCx<'tcx, Self>,
    _local: Local,
    _mplace: &MPlaceTy<'tcx, Self::Provenance>,
) -> InterpResult<'tcx>
Called immediately after actual memory was allocated for a local but before the local’s stack frame is updated to point to that memory.
fn eval_mir_constant<F>(
    ecx: &InterpCx<'tcx, Self>,
    val: Const<'tcx>,
    span: Span,
    layout: Option<TyAndLayout<'tcx>>,
    eval: F,
) -> InterpResult<'tcx, OpTy<'tcx, Self::Provenance>>
where
    F: Fn(&InterpCx<'tcx, Self>, Const<'tcx>, Span, Option<TyAndLayout<'tcx>>) -> InterpResult<'tcx, OpTy<'tcx, Self::Provenance>>,
Evaluate the given constant. The eval function will do all the required evaluation, but this hook has the chance to do some pre/postprocessing.
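As an illustration of the pre/postprocessing slot this hook leaves open, here is a hedged sketch of how a hypothetical machine could override it while delegating all real work to eval. The body mirrors the signature printed above; it is not the actual DummyMachine implementation.

fn eval_mir_constant<F>(
    ecx: &InterpCx<'tcx, Self>,
    val: Const<'tcx>,
    span: Span,
    layout: Option<TyAndLayout<'tcx>>,
    eval: F,
) -> InterpResult<'tcx, OpTy<'tcx, Self::Provenance>>
where
    F: Fn(&InterpCx<'tcx, Self>, Const<'tcx>, Span, Option<TyAndLayout<'tcx>>) -> InterpResult<'tcx, OpTy<'tcx, Self::Provenance>>,
{
    // Preprocessing (e.g. caching or logging the constant) could happen here.
    let result = eval(ecx, val, span, layout);
    // Postprocessing could inspect or adjust `result` before returning it.
    result
}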
Auto Trait Implementations

impl DynSend for DummyMachine
impl DynSync for DummyMachine
impl Freeze for DummyMachine
impl RefUnwindSafe for DummyMachine
impl Send for DummyMachine
impl Sync for DummyMachine
impl Unpin for DummyMachine
impl UnwindSafe for DummyMachine
Blanket Implementations
impl<T> BorrowMut<T> for T
where
    T: ?Sized,

fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.
impl<T, R> CollectAndApply<T, R> for T
impl<T> Filterable for T

fn filterable(
    self,
    filter_name: &'static str,
) -> RequestFilterDataProvider<T, fn(_: DataRequest<'_>) -> bool>
Creates a filterable data provider with the given name for debugging.
impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>

fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.

impl<P> IntoQueryParam<P> for P

fn into_query_param(self) -> P
impl<T> MaybeResult<T> for T

impl<T> Pointable for T
impl<I, T, U> Upcast<I, U> for T
where
    U: UpcastFrom<I, T>,

impl<I, T> UpcastFrom<I, T> for T

fn upcast_from(from: T, _tcx: I) -> T
impl<Tcx, T> Value<Tcx> for T
where
    Tcx: DepContext,

default fn from_cycle_error(
    tcx: Tcx,
    cycle_error: &CycleError,
    _guar: ErrorGuaranteed,
) -> T
impl<T> WithSubscriber for T

fn with_subscriber<S>(self, subscriber: S) -> WithDispatch<Self>

fn with_current_subscriber(self) -> WithDispatch<Self>
impl<'a, T> Captures<'a> for T
where
    T: ?Sized,

impl<T> ErasedDestructor for T
where
    T: 'static,

impl<T> MaybeSendSync for T
Layout
Note: Most layout information is completely unstable and may even differ between compilations. The only exception is types with certain repr(...)
attributes. Please see the Rust Reference's “Type Layout” chapter for details on type layout guarantees.
Size: 0 bytes