\section{i915}
The logic for adding a display device:
In FinishInit(), for each DDI (A-E),
first grab an unused pipe (there are only 3: A, B, C),
then create a DP or HDMI device according to the device type that DDI supports.
Presumably only a DDI with a monitor attached results in a successfully created device (see the sketch below).
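A minimal sketch of that loop, with hypothetical names (Ddi, Pipe, ProbeDdi, DisplayDevice here only approximate the real Controller code):
\begin{verbatim}
#include <cstddef>
#include <initializer_list>
#include <memory>
#include <vector>

enum class Ddi { A, B, C, D, E };
enum class Pipe { A, B, C };

struct DisplayDevice { Ddi ddi; Pipe pipe; };

// Hypothetical probe: in the real driver this checks the DDI for an
// attached monitor and returns a DP or HDMI device on success.
std::unique_ptr<DisplayDevice> ProbeDdi(Ddi ddi, Pipe pipe) {
    (void)ddi; (void)pipe;
    return nullptr;  // stub: no hardware here
}

std::vector<std::unique_ptr<DisplayDevice>> InitDisplays() {
    std::vector<std::unique_ptr<DisplayDevice>> displays;
    const Pipe pipes[] = {Pipe::A, Pipe::B, Pipe::C};
    size_t next_pipe = 0;
    for (Ddi ddi : {Ddi::A, Ddi::B, Ddi::C, Ddi::D, Ddi::E}) {
        if (next_pipe == 3) break;  // 5 DDIs, but only 3 pipes
        if (auto dev = ProbeDdi(ddi, pipes[next_pipe])) {
            displays.push_back(std::move(dev));
            ++next_pipe;  // a pipe is consumed only on a successful probe
        }
    }
    return displays;
}
\end{verbatim}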
Setting up the display controller framebuffer:
Allocate a region in the GTT, then configure the mapping from the region's base to the framebuffer VMO.
The region's base is really an offset measured from the start of the global GTT (MMADR + 8MB); it is not an actual virtual address.
Then set the surface's base address to the region's base.
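A minimal sketch of that flow; write_gtt_pte and write_surface_base are hypothetical stand-ins for the driver's register accessors, and the gen_pte_encode signature is assumed from the trace further down:
\begin{verbatim}
#include <cstddef>
#include <cstdint>

// Hypothetical stand-ins for the driver's real MMIO/PTE accessors.
void write_gtt_pte(uint64_t gtt_offset, uint64_t pte);  // store into global GTT
void write_surface_base(uint64_t gtt_offset);           // plane surface register
uint64_t gen_pte_encode(uint64_t paddr);                // paddr | valid bits

// Map a framebuffer's physical pages at region_base (an offset into the
// global GTT, *not* a CPU virtual address) and point the plane at it.
void MapFramebuffer(uint64_t region_base, const uint64_t* paddrs, size_t npages) {
    constexpr uint64_t kPageSize = 4096;
    for (size_t i = 0; i < npages; i++) {
        write_gtt_pte(region_base + i * kPageSize, gen_pte_encode(paddrs[i]));
    }
    // The display engine resolves this offset through the global GTT.
    write_surface_base(region_base);
}
\end{verbatim}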
v3 gpu overview, page 6, "VGA and Extended VGA I/O and Memory Register Map":
the addresses there cover both port I/O and MMIO.
intel-gfx-prm-osrc-kbl-vol05-memory_views.pdf
Finally found it:
This register requests allocation for the combined Graphics Translation Table Modification Range and Memory
Mapped Range. The range requires 16 MB combined for MMIO and Global GTT aperture, with 2MB of that used
by MMIO, 6MB reserved, and 8MB used by GTT. GTTADR will begin at (GTTMMADR + 8 MB) while the MMIO
base address will be the same as GTTMMADR. The region between (GTTMMADR + 2MB) - (GTTMMADR + 8MB)
is reserved. For the Global GTT, this range is defined as a memory BAR in graphics device config space. It is an
alias into which software is required to write Page Table Entry values (PTEs). Software may read PTE values from
the global Graphics Translation Table (GTT). PTEs cannot be written directly into the global GTT memory area.
The device snoops writes to this region in order to invalidate any cached translations within the various TLB's
implemented on-chip. The allocation is for 16MB and the base address is defined by bits [38:24]. Note: Per PCI
enumeration requirements, to determine the size of a BAR software should write all 1s to the BAR, read it back
and see how many of the lower bits read as 0 (meaning that they didn't take the 1s). This indicates the size of the
BAR. In order for this to work bits 63 down to the size of the BAR need to be writable to 1s.
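The BAR-sizing note at the end is standard PCI enumeration; a minimal sketch of it for a 32-bit memory BAR, assuming hypothetical config-space accessors cfg_read32/cfg_write32:
\begin{verbatim}
#include <cstdint>

// Hypothetical config-space accessors (e.g. pci_config_read32 in Zircon).
uint32_t cfg_read32(uint16_t offset);
void cfg_write32(uint16_t offset, uint32_t value);

// Size a memory BAR: write all 1s, read back, restore, and count how many
// low bits refused the 1s. For a 64-bit BAR such as GTTMMADR the same
// procedure spans both halves of the register.
uint64_t ProbeBarSize(uint16_t bar_offset) {
    uint32_t orig = cfg_read32(bar_offset);
    cfg_write32(bar_offset, 0xffffffff);
    uint32_t readback = cfg_read32(bar_offset);
    cfg_write32(bar_offset, orig);  // restore the original base address
    uint32_t masked = readback & 0xfffffff0;  // drop the low attribute bits
    return static_cast<uint64_t>(~masked) + 1;  // e.g. 0xff000000 -> 16MB
}
\end{verbatim}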
Intel graphics has 4 memory regions:
1. PCI config space, 256 bytes
MMADR (offset 0x10) points to the MMIO; on older models MMADR's offset is 0x14.
ASL storage (offset 0xfc) points to the OpRegion.
The Intel Atom Processor E6xx Series datasheet has a detailed layout of the PCI configuration space.
The global GTT's address is MMADR + 8MB.
2. The MMIO register region, 512KB (note the KBL excerpt above allots 2MB to MMIO)
See the PRM for this.
3. OpRegion
No document gives its exact layout; the Linux kernel source code is the only reference.
4. Framebuffer
Two registers both point to the framebuffer:
GMADR (PCI config offset 0x18, BAR 2)
BDSM (PCI config offset 0x5c)
pci_config_read reads values from the PCI configuration region.
mmio_space is the virtual address of the i915 MMIO.
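As a quick reference, those offsets collected as constants (the constant names are ours, not the driver's):
\begin{verbatim}
#include <cstdint>

// Config-space offsets from the notes above (KBL-era layout; older parts
// put MMADR at 0x14 instead of 0x10).
constexpr uint16_t kMmadrOffset = 0x10;  // BAR 0: MMIO + global GTT
constexpr uint16_t kGmadrOffset = 0x18;  // BAR 2: aperture (framebuffer)
constexpr uint16_t kBdsmOffset  = 0x5c;  // base of stolen memory
constexpr uint16_t kAslsOffset  = 0xfc;  // ASL storage: OpRegion address

// Per the PRM excerpt, the global GTT begins 8MB into the GTTMMADR range.
constexpr uint64_t GlobalGttBase(uint64_t mmadr) { return mmadr + (8ull << 20); }
\end{verbatim}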
In Linux's intel_vbt_defs.h:
child_device_config corresponds to ddi_config in Zircon
bdb_general_definitions <=> general_definitions
bdb_lvds_options <=> lvds_config
ACPI integrated graphics device opregion specification. (pdf)
ASL storage register. This register is a software scratch register used by the
system BIOS to communicate the OpRegion base address to the graphics driver.
\begin{verbatim}
intel_i915_bind()
Controller::Bind()
pci_config_read16(&pci_, PCI_CONFIG_DEVICE_ID, &device_id_);
get the device ID
igd_opregion_.Init(&pci_);
pci_config_read32(pci, kIgdOpRegionAddrReg, &igd_addr);
get the OpRegion address from the PCI config
zx_vmo_create_physical(get_root_resource(),
igd_addr & ~(PAGE_SIZE - 1),
igd_opregion_pages_len_, &vmo);
VmObjectPhysical::Create(paddr, size, &vmo);
VmObjectPhysical(base, size)
zx::vmar::root_self()->map(0, igd_opregion_pages_, 0, igd_opregion_pages_len_,
ZX_VM_FLAG_PERM_READ | ZX_VM_FLAG_PERM_WRITE,
&igd_opregion_pages_base_);
vmar->Map(vmar_offset, vmo->vmo(), vmo_offset, len, map_flags, &vm_mapping);
vmar_->CreateVmMapping(vmar_offset, len, /* align_pow2 */ 0,
vmar_flags, fbl::move(vmo), vmo_offset,
arch_mmu_flags, "useralloc",
&result);
CreateSubVmarInternal(mapping_offset, size, align_pow2, vmar_flags, fbl::move(vmo),
vmo_offset, arch_mmu_flags, name, &res);
VmMapping(*this, new_base, size, vmar_flags,
fbl::move(vmo), vmo_offset, arch_mmu_flags));
vmar->Activate();
igd_opregion_pages_base_ holds the virtual address of the mapped OpRegion
vbt_header = reinterpret_cast<vbt_header_t*>(&igd_opregion_->mailbox4);
get the VBT; the VBT's details are not in any document, only in the Linux kernel.
get the BIOS data block header; also undocumented, see the Linux kernel source
ProcessDdiConfigs()
ddi_config_size is the size of each entry
records info about the HDMI, DP, etc. ports
CheckForLowVoltageEdp(pci)
GetPanelType(pci, &panel_type_)
Swsci(pci, SciEntryParam::kFuncGetBiosData, SciEntryParam::kGbdaSupportedCalls,
0 /* unused additional_param */, &exit_param, &additional_res)
ProcessBacklightData();
MapPciMmio(0u, &regs, &size);
pci_map_bar(&pci_, pci_bar, ZX_CACHE_POLICY_UNCACHED_DEVICE,
&mapped_bars_[pci_bar].base,
&mapped_bars_[pci_bar].size,
&mapped_bars_[pci_bar].vmo);
pci->ops->map_bar(pci->ctx, bar_id, cache_policy, vaddr, size, out_handle);
pci_op_map_bar(void* ctx, // proxy.c
pci_op_get_bar(ctx, bar_id, &bar);
pci_rpc_request(dev, PCI_OP_GET_BAR, &handle, &req, &resp);
// ask the kpci process for the BAR; in another process:
kpci_get_bar(pci_msg_t* req, kpci_device_t* device, zx_handle_t ch)
zx_pci_get_bar(device->handle, req->bar.id, &bar, &handle);
// enter the kernel
pci_device->GetBar(bar_num);
device_->GetBarInfo(bar_num);
pcie_bar_info_t* ret = &bars_[bar_ndx];
obtained earlier in PcieDevice::ProbeBarLocked(uint bar_id)
zx_vmar_map
we got the physical VMO earlier; now map a virtual address onto it.
interrupts_.Init(this);
pci_query_irq_mode(controller_->pci(), ZX_PCIE_IRQ_MODE_LEGACY, &irq_cnt);
legacy mode: irq_cnt = 1
pci_map_interrupt
gets an interrupt dispatcher for this device, which the IRQ thread can then wait on.
sys_pci_map_interrupt
pci_device->MapInterrupt(which_irq, &interrupt_dispatcher, &rights);
PciInterruptDispatcher::Create(device_,
which_irq,
irqs_maskable_,
rights,
interrupt_dispatcher);
interrupt_dispatcher->RegisterInterruptHandler();
device_->RegisterIrqHandler(vector_, IrqThunk, this);
gtt_.Init(this)
pci_get_bti(controller->pci(), 0, bti_.reset_and_get_address())
kpci_get_bti(pci_msg_t* req, kpci_device_t* device, zx_handle_t ch)
pciroot_get_bti(&device->pciroot, bdf, req->bti_index, &bti)
pciroot_op_get_bti(void* context, uint32_t bdf, uint32_t index, zx_handle_t* bti) // bus-apic.c
iommu_manager_iommu_for_bdf(bdf, &iommu_handle);
*iommu_h = iommu_mgr.dummy_iommu;
uses the dummy IOMMU
zx_bti_create(iommu_handle, 0, bdf, bti)
BusTransactionInitiatorDispatcher::Create(iommu_dispatcher->iommu(), bti_id,
&dispatcher, &rights);
bti_.get_info(ZX_INFO_BTI, &info, sizeof(zx_info_bti_t), nullptr, nullptr);
sys_object_get_info
gets the info from the dummy IOMMU
bti_.pin(ZX_BTI_PERM_READ, scratch_buffer_, 0, PAGE_SIZE, &scratch_buffer_paddr_, 1,
&scratch_buffer_pmt_);
bti_dispatcher->Pin(vmo_dispatcher->vmo(), offset, size, iommu_perms, &new_pmt,
&new_pmt_rights);
PinnedMemoryTokenDispatcher::Create(fbl::WrapRefPtr(this), fbl::move(vmo),
offset, size, perms, pmt, pmt_rights);
vmo->CommitRange(offset, size, nullptr);
vmo->Pin(offset, size);
the gist here: the BTI obtained pinned memory, with the mapping set up in the IOMMU
since the dummy IOMMU is used, the device virtual address obtained is just the physical address
gen_pte_encode(scratch_buffer_paddr_);
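// (A guess at the encoding, not confirmed from the source: a global-GTT
// PTE is presumably the page-aligned bus address with a low valid bit
// set, roughly pte = paddr | 1.)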
DdkAdd("intel_i915");
AddProtocol() adds the DisplayControllerProtocol: ddk_proto_id_ = ZX_PROTOCOL_DISPLAY_CONTROLLER_IMPL;
this triggers loading of the display controller driver; see display-controller.txt
device_add(parent_, &args, &zxdev_);
on a new thread:
Controller::FinishInit()
InitDisplays();
BringUpDisplayEngine(false);
Enable PCH Reset Handshake
Wait for Power Well 0 distribution
InitDisplay(registers::kDdis[i])
grab an unused pipe
pipe = registers::PIPE_A
AddDisplay(fbl::move(disp_device));
ReallocatePipeBuffers(false);
invoke the callback registered by the display controller
interrupts_.FinishInit();
===========
PCI device interrupt initialization:
acpi_drv_create
get_pci_init_arg(&arg, &arg_size);
find_pcie_config(res);
sys_pci_init()
configure_interrupt(irq, tm, pol);
apic_io_configure_irq(
vector,
tm,
pol,
DELIVERY_MODE_FIXED,
IO_APIC_IRQ_MASK,
DST_MODE_PHYSICAL,
apic_bsp_id(),
0);
this in effect masks, in the APIC, the global IRQs corresponding to all PCI devices
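// (presumably they stay masked so nothing fires until a driver actually
// registers a handler for its IRQ)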
PcieDevice::InitLegacyIrqStateLocked(PcieUpstreamNode& upstream)
irq_.legacy.pin = cfg_->Read(PciConfig::kInterruptPin);
MapPinToIrqLocked(fbl::RefPtr<PcieUpstreamNode>(&upstream));
irq_.legacy.shared_handler = bus_drv_.FindLegacyIrqHandler(irq_.legacy.irq_id);
multiple devices may use the same global IRQ
SharedLegacyIrqHandler::Create(irq_id);
SharedLegacyIrqHandler
register_int_handler(irq_id_, HandlerThunk, this);
x86_vector = apic_io_fetch_irq_vector(vector)
if the current global IRQ is not yet mapped to a CPU vector, allocate one
p2ra_allocate_range(&x86_irq_vector_allocator, 1, &range_start);
====
Garnet presumably does not use ApplyConfig to update the plane surface base address
Controller::ApplyConfig
Controller::ApplyConfiguration
DisplayDevice::ApplyConfiguration
DisplayDevice::ConfigurePrimaryPlane
plane_surface.set_surface_base_addr
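// (the write to the plane surface base register is what actually moves the
// scanout; on Intel display hardware the new address is latched and, as far
// as I know, takes effect at the next vblank)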
\end{verbatim}