1 # SPDX-License-Identifier: GPL-2.0+
2 # Copyright (c) 2018 Google, Inc
3 # Written by Simon Glass <sjg@chromium.org>
5 """Entry-type module for sections (groups of entries)
7 Sections are entries which can contain other entries. This allows hierarchical
11 from collections import OrderedDict
12 import concurrent.futures
16 from binman.entry import Entry
17 from binman import state
18 from dtoc import fdt_util
19 from u_boot_pylib import tools
20 from u_boot_pylib import tout
21 from u_boot_pylib.tools import to_hex_size
24 class Entry_section(Entry):
25 """Entry that contains other entries
27 A section is an entry which can contain other entries, thus allowing
28 hierarchical images to be created. See 'Sections and hierarchical images'
29 in the binman README for more information.
31 The base implementation simply joins the various entries together, using
32 various rules about alignment, etc.
37 This class can be subclassed to support other file formats which hold
38 multiple entries, such as CBFS. To do this, override the following
39 functions. The documentation here describes what your function should do.
40 For example code, see etypes which subclass `Entry_section`, or `cbfs.py`
41 for a more involved example::
43 $ grep -l \\(Entry_section tools/binman/etype/*.py
46 Call `super().ReadNode()`, then read any special properties for the
47 section. Then call `self.ReadEntries()` to read the entries.
49 Binman calls this at the start when reading the image description.
52 Read in the subnodes of the section. This may involve creating entries
53 of a particular etype automatically, as well as reading any special
54 properties in the entries. For each entry, entry.ReadNode() should be
55 called, to read the basic entry properties. The properties should be
56 added to `self._entries[]`, in the correct order, with a suitable name.
58 Binman calls this at the start when reading the image description.
60 BuildSectionData(required)
61 Create the custom file format that you want and return it as bytes.
62 This likely sets up a file header, then loops through the entries,
63 adding them to the file. For each entry, call `entry.GetData()` to
64 obtain the data. If that returns None, and `required` is False, then
65 this method must give up and return None. But if `required` is True then
66 it should assume that all data is valid.
68 Binman calls this when packing the image, to find out the size of
69 everything. It is called again at the end when building the final image.
71 SetImagePos(image_pos):
72 Call `super().SetImagePos(image_pos)`, then set the `image_pos` values
73 for each of the entries. This should use the custom file format to find
74 the `start offset` (and `image_pos`) of each entry. If the file format
75 uses compression in such a way that there is no offset available (other
76 than reading the whole file and decompressing it), then the offsets for
77 affected entries can remain unset (`None`). The size should also be set
80 Binman calls this after the image has been packed, to update the
81 location that all the entries ended up at.
83 ReadChildData(child, decomp, alt_format):
84 The default version of this may be good enough, if you are able to
85 implement SetImagePos() correctly. But that is a bit of a bypass, so
86 you can override this method to read from your custom file format. It
87 should read the entire entry containing the custom file using
88 `super().ReadData(True)`, then parse the file to get the data for the
89 given child, then return that data.
91 If your file format supports compression, the `decomp` argument tells
92 you whether to return the compressed data (`decomp` is False) or to
93 uncompress it first, then return the uncompressed data (`decomp` is
94 True). This is used by the `binman extract -U` option.
96 If your entry supports alternative formats, the alt_format provides the
97 alternative format that the user has selected. Your function should
98 return data in that format. This is used by the 'binman extract -l'
101 Binman calls this when reading in an image, in order to populate all the
102 entries with the data from that image (`binman ls`).
104 WriteChildData(child):
105 Binman calls this after `child.data` is updated, to inform the custom
106 file format about this, in case it needs to do updates.
108 The default version of this does nothing and probably needs to be
109 overridden for the 'binman replace' command to work. Your version should
110 use `child.data` to update the data for that child in the custom file
113 Binman calls this when updating an image that has been read in and in
114 particular to update the data for a particular entry (`binman replace`)
116 Properties / Entry arguments
117 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
119 See :ref:`develop/package/binman:Image description format` for more
123 Default alignment for this section, if no alignment is given in the
127 Pad byte to use when padding
130 True if entries should be sorted by offset, False if they must be
131 in-order in the device tree description
134 Used to build an x86 ROM which ends at 4GB (2^32)
137 Adds a prefix to the name of every entry in the section when writing out
141 Number of bytes before the first entry starts. These effectively adjust
142 the starting offset of entries. For example, if this is 16, then the
143 first entry would start at 16. An entry with offset = 20 would in fact
144 be written at offset 4 in the image file, since the first 16 bytes are
145 skipped when writing.
148 filename to write the unpadded section contents to within the output
149 directory (None to skip this).
151 Since a section is also an entry, it inherits all the properies of entries
154 Note that the `allow_missing` member controls whether this section permits
155 external blobs to be missing their contents. The option will produce an
156 image but of course it will not work. It is useful to make sure that
157 Continuous Integration systems can build without the binaries being
158 available. This is set by the `SetAllowMissing()` method, if
159 `--allow-missing` is passed to binman.
161 def __init__(self, section, etype, node, test=False):
163 super().__init__(section, etype, node)
164 self._entries = OrderedDict()
167 self._skip_at_start = None
168 self._end_4gb = False
169 self._ignore_missing = False
170 self._filename = None
171 self.align_default = 0
173 def IsSpecialSubnode(self, node):
174 """Check if a node is a special one used by the section itself
176 Some nodes are used for hashing / signatures and do not add entries to
180 bool: True if the node is a special one, else False
182 start_list = ('cipher', 'hash', 'signature', 'template')
183 return any(node.name.startswith(name) for name in start_list)
186 """Read properties from the section node"""
188 self._pad_byte = fdt_util.GetInt(self._node, 'pad-byte', 0)
189 self._sort = fdt_util.GetBool(self._node, 'sort-by-offset')
190 self._end_4gb = fdt_util.GetBool(self._node, 'end-at-4gb')
191 self._skip_at_start = fdt_util.GetInt(self._node, 'skip-at-start')
194 self.Raise("Section size must be provided when using end-at-4gb")
195 if self._skip_at_start is not None:
196 self.Raise("Provide either 'end-at-4gb' or 'skip-at-start'")
198 self._skip_at_start = 0x100000000 - self.size
200 if self._skip_at_start is None:
201 self._skip_at_start = 0
202 self._name_prefix = fdt_util.GetString(self._node, 'name-prefix')
203 self.align_default = fdt_util.GetInt(self._node, 'align-default', 0)
204 self._filename = fdt_util.GetString(self._node, 'filename',
209 def ReadEntries(self):
210 for node in self._node.subnodes:
211 if self.IsSpecialSubnode(node):
213 entry = Entry.Create(self, node,
214 expanded=self.GetImage().use_expanded,
215 missing_etype=self.GetImage().missing_etype)
217 entry.SetPrefix(self._name_prefix)
218 self._entries[node.name] = entry
220 def _Raise(self, msg):
221 """Raises an error for this section
224 msg (str): Error message to use in the raise string
228 raise ValueError("Section '%s': %s" % (self._node.path, msg))
232 for entry in self._entries.values():
233 fdts.update(entry.GetFdts())
236 def ProcessFdt(self, fdt):
237 """Allow entries to adjust the device tree
239 Some entries need to adjust the device tree for their purposes. This
240 may involve adding or deleting properties.
242 todo = self._entries.values()
243 for passnum in range(3):
246 if not entry.ProcessFdt(fdt):
247 next_todo.append(entry)
252 self.Raise('Internal error: Could not complete processing of Fdt: remaining %s' %
256 def gen_entries(self):
257 super().gen_entries()
258 for entry in self._entries.values():
261 def AddMissingProperties(self, have_image_pos):
262 """Add new properties to the device tree as needed for this entry"""
263 super().AddMissingProperties(have_image_pos)
264 if self.compress != 'none':
265 have_image_pos = False
266 for entry in self._entries.values():
267 entry.AddMissingProperties(have_image_pos)
269 def ObtainContents(self, fake_size=0, skip_entry=None):
270 return self.GetEntryContents(skip_entry=skip_entry)
272 def GetPaddedDataForEntry(self, entry, entry_data):
273 """Get the data for an entry including any padding
275 Gets the entry data and uses the section pad-byte value to add padding
276 before and after as defined by the pad-before and pad-after properties.
277 This does not consider alignment.
280 entry: Entry to check
281 entry_data: Data for the entry, False if is null
284 Contents of the entry along with any pad bytes before and
287 pad_byte = (entry._pad_byte if isinstance(entry, Entry_section)
291 # Handle padding before the entry
293 data += tools.get_bytes(self._pad_byte, entry.pad_before)
295 # Add in the actual entry data
298 # Handle padding after the entry
300 data += tools.get_bytes(self._pad_byte, entry.pad_after)
303 data += tools.get_bytes(pad_byte, entry.size - len(data))
305 self.Detail('GetPaddedDataForEntry: size %s' % to_hex_size(self.data))
309 def BuildSectionData(self, required):
310 """Build the contents of a section
312 This places all entries at the right place, dealing with padding before
313 and after entries. It does not do padding for the section itself (the
314 pad-before and pad-after properties in the section items) since that is
315 handled by the parent section.
317 This should be overridden by subclasses which want to build their own
318 data structure for the section.
320 Missing entries will have be given empty (or fake) data, so are
321 processed normally here.
324 required: True if the data must be present, False if it is OK to
328 Contents of the section (bytes), None if not available
330 section_data = bytearray()
332 for entry in self._entries.values():
333 entry_data = entry.GetData(required)
335 # This can happen when this section is referenced from a collection
336 # earlier in the image description. See testCollectionSection().
337 if not required and entry_data is None:
340 entry_data_final = entry_data
341 if entry_data is None:
342 pad_byte = (entry._pad_byte if isinstance(entry, Entry_section)
344 entry_data_final = tools.get_bytes(self._pad_byte, entry.size)
346 data = self.GetPaddedDataForEntry(entry, entry_data_final)
347 # Handle empty space before the entry
348 pad = (entry.offset or 0) - self._skip_at_start - len(section_data)
350 section_data += tools.get_bytes(self._pad_byte, pad)
352 # Add in the actual entry data
354 end_offset = entry.offset + entry.size
355 if end_offset > len(section_data):
356 entry.Raise("Offset %#x (%d) ending at %#x (%d) must overlap with existing entries" %
357 (entry.offset, entry.offset, end_offset,
359 # Don't write anything for null entries'
360 if entry_data is not None:
361 section_data = (section_data[:entry.offset] + data +
362 section_data[entry.offset + entry.size:])
366 self.Detail('GetData: %d entries, total size %#x' %
367 (len(self._entries), len(section_data)))
368 return self.CompressData(section_data)
370 def GetPaddedData(self, data=None):
371 """Get the data for a section including any padding
373 Gets the section data and uses the parent section's pad-byte value to
374 add padding before and after as defined by the pad-before and pad-after
375 properties. If this is a top-level section (i.e. an image), this is the
376 same as GetData(), since padding is not supported.
378 This does not consider alignment.
381 Contents of the section along with any pad bytes before and
384 section = self.section or self
386 data = self.GetData()
387 return section.GetPaddedDataForEntry(self, data)
389 def GetData(self, required=True):
390 """Get the contents of an entry
392 This builds the contents of the section, stores this as the contents of
393 the section and returns it. If the section has a filename, the data is
397 required: True if the data must be present, False if it is OK to
401 bytes content of the section, made up for all all of its subentries.
402 This excludes any padding. If the section is compressed, the
403 compressed data is returned
405 if not self.build_done:
406 data = self.BuildSectionData(required)
409 self.SetContents(data)
413 tools.write_file(tools.get_output_filename(self._filename), data)
416 def GetOffsets(self):
417 """Handle entries that want to set the offset/size of other entries
419 This calls each entry's GetOffsets() method. If it returns a list
420 of entries to update, it updates them.
422 self.GetEntryOffsets()
425 def ResetForPack(self):
426 """Reset offset/size fields so that packing can be done again"""
427 super().ResetForPack()
428 for entry in self._entries.values():
431 def Pack(self, offset):
432 """Pack all entries into the section"""
436 self._extend_entries()
441 data = self.BuildSectionData(True)
442 self.SetContents(data)
446 offset = super().Pack(offset)
450 def _PackEntries(self):
451 """Pack all entries into the section"""
452 offset = self._skip_at_start
453 for entry in self._entries.values():
454 offset = entry.Pack(offset)
457 def _extend_entries(self):
458 """Extend any entries that are permitted to"""
460 for entry in self._entries.values():
462 exp_entry.extend_to_limit(entry.offset)
464 if entry.extend_size:
467 exp_entry.extend_to_limit(self.size)
469 def _SortEntries(self):
470 """Sort entries by offset"""
471 entries = sorted(self._entries.values(), key=lambda entry: entry.offset)
472 self._entries.clear()
473 for entry in entries:
474 self._entries[entry._node.name] = entry
476 def CheckEntries(self):
477 """Check that entries do not overlap or extend outside the section"""
478 max_size = self.size if self.uncomp_size is None else self.uncomp_size
482 for entry in self._entries.values():
484 if (entry.offset < self._skip_at_start or
485 entry.offset + entry.size > self._skip_at_start +
487 entry.Raise('Offset %#x (%d) size %#x (%d) is outside the '
488 "section '%s' starting at %#x (%d) "
490 (entry.offset, entry.offset, entry.size, entry.size,
491 self._node.path, self._skip_at_start,
492 self._skip_at_start, max_size, max_size))
493 if not entry.overlap:
494 if entry.offset < offset and entry.size:
495 entry.Raise("Offset %#x (%d) overlaps with previous entry '%s' ending at %#x (%d)" %
496 (entry.offset, entry.offset, prev_name, offset,
498 offset = entry.offset + entry.size
499 prev_name = entry.GetPath()
501 def WriteSymbols(self, section):
502 """Write symbol values into binary files for access at run time"""
503 for entry in self._entries.values():
504 entry.WriteSymbols(self)
506 def SetCalculatedProperties(self):
507 super().SetCalculatedProperties()
508 for entry in self._entries.values():
509 entry.SetCalculatedProperties()
511 def SetImagePos(self, image_pos):
512 super().SetImagePos(image_pos)
513 if self.compress == 'none':
514 for entry in self._entries.values():
515 entry.SetImagePos(image_pos + self.offset)
517 def ProcessContents(self):
518 sizes_ok_base = super(Entry_section, self).ProcessContents()
520 for entry in self._entries.values():
521 if not entry.ProcessContents():
523 return sizes_ok and sizes_ok_base
525 def WriteMap(self, fd, indent):
526 """Write a map of the section to a .map file
529 fd: File to write the map to
531 Entry.WriteMapLine(fd, indent, self.name, self.offset or 0,
532 self.size, self.image_pos)
533 for entry in self._entries.values():
534 entry.WriteMap(fd, indent + 1)
536 def GetEntries(self):
539 def GetContentsByPhandle(self, phandle, source_entry, required):
540 """Get the data contents of an entry specified by a phandle
542 This uses a phandle to look up a node and and find the entry
543 associated with it. Then it returns the contents of that entry.
545 The node must be a direct subnode of this section.
548 phandle: Phandle to look up (integer)
549 source_entry: Entry containing that phandle (used for error
551 required: True if the data must be present, False if it is OK to
555 data from associated entry (as a string), or None if not found
557 node = self._node.GetFdt().LookupPhandle(phandle)
559 source_entry.Raise("Cannot find node for phandle %d" % phandle)
560 entry = self.FindEntryByNode(node)
562 source_entry.Raise("Cannot find entry for node '%s'" % node.name)
563 return entry.GetData(required)
565 def LookupEntry(self, entries, sym_name, msg):
566 """Look up the entry for an ENF symbol
569 entries (dict): entries to search:
572 sym_name: Symbol name in the ELF file to look up in the format
573 _binman_<entry>_prop_<property> where <entry> is the name of
574 the entry and <property> is the property to find (e.g.
575 _binman_u_boot_prop_offset). As a special case, you can append
576 _any to <entry> to have it search for any matching entry. E.g.
577 _binman_u_boot_any_prop_offset will match entries called u-boot,
578 u-boot-img and u-boot-nodtb)
579 msg: Message to display if an error occurs
583 Entry: entry object that was found
584 str: name used to search for entries (uses '-' instead of the
585 '_' used by the symbol name)
586 str: property name the symbol refers to, e.g. 'image_pos'
589 ValueError:the symbol name cannot be decoded, e.g. does not have
592 m = re.match(r'^_binman_(\w+)_prop_(\w+)$', sym_name)
594 raise ValueError("%s: Symbol '%s' has invalid format" %
596 entry_name, prop_name = m.groups()
597 entry_name = entry_name.replace('_', '-')
598 entry = entries.get(entry_name)
600 if entry_name.endswith('-any'):
601 root = entry_name[:-4]
603 if name.startswith(root):
604 rest = name[len(root):]
605 if rest in ['', '-elf', '-img', '-nodtb']:
606 entry = entries[name]
607 return entry, entry_name, prop_name
609 def LookupSymbol(self, sym_name, optional, msg, base_addr, entries=None):
610 """Look up a symbol in an ELF file
612 Looks up a symbol in an ELF file. Only entry types which come from an
613 ELF image can be used by this function.
615 At present the only entry properties supported are:
617 image_pos - 'base_addr' is added if this is not an end-at-4gb image
621 sym_name: Symbol name in the ELF file to look up in the format
622 _binman_<entry>_prop_<property> where <entry> is the name of
623 the entry and <property> is the property to find (e.g.
624 _binman_u_boot_prop_offset). As a special case, you can append
625 _any to <entry> to have it search for any matching entry. E.g.
626 _binman_u_boot_any_prop_offset will match entries called u-boot,
627 u-boot-img and u-boot-nodtb)
628 optional: True if the symbol is optional. If False this function
629 will raise if the symbol is not found
630 msg: Message to display if an error occurs
631 base_addr: Base address of image. This is added to the returned
632 image_pos in most cases so that the returned position indicates
633 where the targetted entry/binary has actually been loaded. But
634 if end-at-4gb is used, this is not done, since the binary is
635 already assumed to be linked to the ROM position and using
636 execute-in-place (XIP).
639 Value that should be assigned to that symbol, or None if it was
640 optional and not found
643 ValueError if the symbol is invalid or not found, or references a
644 property which is not supported
647 entries = self._entries
648 entry, entry_name, prop_name = self.LookupEntry(entries, sym_name, msg)
650 err = ("%s: Entry '%s' not found in list (%s)" %
651 (msg, entry_name, ','.join(entries.keys())))
653 print('Warning: %s' % err, file=sys.stderr)
655 raise ValueError(err)
656 if prop_name == 'offset':
658 elif prop_name == 'image_pos':
659 value = entry.image_pos
660 if not self.GetImage()._end_4gb:
663 if prop_name == 'size':
666 raise ValueError("%s: No such property '%s'" % (msg, prop_name))
668 def GetRootSkipAtStart(self):
669 """Get the skip-at-start value for the top-level section
671 This is used to find out the starting offset for root section that
672 contains this section. If this is a top-level section then it returns
673 the skip-at-start offset for this section.
675 This is used to get the absolute position of section within the image.
678 Integer skip-at-start value for the root section containing this
682 return self.section.GetRootSkipAtStart()
683 return self._skip_at_start
685 def GetStartOffset(self):
686 """Get the start offset for this section
689 The first available offset in this section (typically 0)
691 return self._skip_at_start
693 def GetImageSize(self):
694 """Get the size of the image containing this section
697 Image size as an integer number of bytes, which may be None if the
698 image size is dynamic and its sections have not yet been packed
700 return self.GetImage().size
702 def FindEntryType(self, etype):
703 """Find an entry type in the section
706 etype: Entry type to find
708 entry matching that type, or None if not found
710 for entry in self._entries.values():
711 if entry.etype == etype:
715 def GetEntryContents(self, skip_entry=None):
716 """Call ObtainContents() for each entry in the section
718 The overall goal of this function is to read in any available data in
719 this entry and any subentries. This includes reading in blobs, setting
720 up objects which have predefined contents, etc.
722 Since entry types which contain entries call ObtainContents() on all
723 those entries too, the result is that ObtainContents() is called
724 recursively for the whole tree below this one.
726 Entries with subentries are generally not *themselves& processed here,
727 i.e. their ObtainContents() implementation simply obtains contents of
728 their subentries, skipping their own contents. For example, the
729 implementation here (for entry_Section) does not attempt to pack the
730 entries into a final result. That is handled later.
732 Generally, calling this results in SetContents() being called for each
733 entry, so that the 'data' and 'contents_size; properties are set, and
734 subsequent calls to GetData() will return value data.
736 Where 'allow_missing' is set, this can result in the 'missing' property
737 being set to True if there is no data. This is handled by setting the
738 data to b''. This function will still return success. Future calls to
739 GetData() for this entry will return b'', or in the case where the data
740 is faked, GetData() will return that fake data.
743 skip_entry: (single) Entry to skip, or None to process all entries
745 Note that this may set entry.absent to True if the entry is not
748 def _CheckDone(entry):
749 if entry != skip_entry:
750 if entry.ObtainContents() is False:
751 next_todo.append(entry)
754 todo = self.GetEntries().values()
755 for passnum in range(3):
756 threads = state.GetThreads()
763 with concurrent.futures.ThreadPoolExecutor(
764 max_workers=threads) as executor:
766 entry: executor.submit(_CheckDone, entry)
769 if self.GetImage().test_section_timeout:
771 done, not_done = concurrent.futures.wait(
772 future_to_data.values(), timeout=timeout)
773 # Make sure we check the result, so any exceptions are
774 # generated. Check the results in entry order, since tests
775 # may expect earlier entries to fail first.
777 job = future_to_data[entry]
780 self.Raise('Timed out obtaining contents')
787 self.Raise('Internal error: Could not complete processing of contents: remaining %s' %
791 def drop_absent(self):
792 """Drop entries which are absent"""
793 self._entries = {n: e for n, e in self._entries.items() if not e.absent}
795 def _SetEntryOffsetSize(self, name, offset, size):
796 """Set the offset and size of an entry
799 name: Entry name to update
800 offset: New offset, or None to leave alone
801 size: New size, or None to leave alone
803 entry = self._entries.get(name)
805 self._Raise("Unable to set offset/size for unknown entry '%s'" %
807 entry.SetOffsetSize(self._skip_at_start + offset if offset is not None
810 def GetEntryOffsets(self):
811 """Handle entries that want to set the offset/size of other entries
813 This calls each entry's GetOffsets() method. If it returns a list
814 of entries to update, it updates them.
816 for entry in self._entries.values():
817 offset_dict = entry.GetOffsets()
818 for name, info in offset_dict.items():
819 self._SetEntryOffsetSize(name, *info)
822 contents_size = len(self.data)
826 data = self.GetPaddedData(self.data)
828 size = tools.align(size, self.align_size)
830 if self.size and contents_size > self.size:
831 self._Raise("contents size %#x (%d) exceeds section size %#x (%d)" %
832 (contents_size, contents_size, self.size, self.size))
835 if self.size != tools.align(self.size, self.align_size):
836 self._Raise("Size %#x (%d) does not match align-size %#x (%d)" %
837 (self.size, self.size, self.align_size,
841 def ListEntries(self, entries, indent):
842 """List the files in the section"""
843 Entry.AddEntryInfo(entries, indent, self.name, self.etype, self.size,
844 self.image_pos, None, self.offset, self)
845 for entry in self._entries.values():
846 entry.ListEntries(entries, indent + 1)
848 def LoadData(self, decomp=True):
849 for entry in self._entries.values():
850 entry.LoadData(decomp)
851 data = self.ReadData(decomp)
852 self.contents_size = len(data)
853 self.ProcessContentsUpdate(data)
854 self.Detail('Loaded data')
857 """Get the image containing this section
859 Note that a top-level section is actually an Image, so this function may
863 Image object containing this section
867 return self.section.GetImage()
870 """Check if the entries in this section will be sorted
873 True if to be sorted, False if entries will be left in the order
874 they appear in the device tree
878 def ReadData(self, decomp=True, alt_format=None):
879 tout.info("ReadData path='%s'" % self.GetPath())
880 parent_data = self.section.ReadData(True, alt_format)
881 offset = self.offset - self.section._skip_at_start
882 data = parent_data[offset:offset + self.size]
884 '%s: Reading data from offset %#x-%#x (real %#x), size %#x, got %#x' %
885 (self.GetPath(), self.offset, self.offset + self.size, offset,
886 self.size, len(data)))
889 def ReadChildData(self, child, decomp=True, alt_format=None):
890 tout.debug(f"ReadChildData for child '{child.GetPath()}'")
891 parent_data = self.ReadData(True, alt_format)
892 offset = child.offset - self._skip_at_start
893 tout.debug("Extract for child '%s': offset %#x, skip_at_start %#x, result %#x" %
894 (child.GetPath(), child.offset, self._skip_at_start, offset))
895 data = parent_data[offset:offset + child.size]
898 data = child.DecompressData(indata)
899 if child.uncomp_size:
900 tout.info("%s: Decompressing data size %#x with algo '%s' to data size %#x" %
901 (child.GetPath(), len(indata), child.compress,
904 new_data = child.GetAltFormat(data, alt_format)
905 if new_data is not None:
909 def WriteData(self, data, decomp=True):
910 ok = super().WriteData(data, decomp)
912 # The section contents are now fixed and cannot be rebuilt from the
913 # containing entries.
914 self.mark_build_done()
917 def WriteChildData(self, child):
918 return super().WriteChildData(child)
920 def SetAllowMissing(self, allow_missing):
921 """Set whether a section allows missing external blobs
924 allow_missing: True if allowed, False if not allowed
926 self.allow_missing = allow_missing
927 for entry in self.GetEntries().values():
928 entry.SetAllowMissing(allow_missing)
930 def SetAllowFakeBlob(self, allow_fake):
931 """Set whether a section allows to create a fake blob
934 allow_fake: True if allowed, False if not allowed
936 super().SetAllowFakeBlob(allow_fake)
937 for entry in self.GetEntries().values():
938 entry.SetAllowFakeBlob(allow_fake)
940 def CheckMissing(self, missing_list):
941 """Check if any entries in this section have missing external blobs
943 If there are missing (non-optional) blobs, the entries are added to the
947 missing_list: List of Entry objects to be added to
949 for entry in self.GetEntries().values():
950 entry.CheckMissing(missing_list)
952 def CheckFakedBlobs(self, faked_blobs_list):
953 """Check if any entries in this section have faked external blobs
955 If there are faked blobs, the entries are added to the list
958 faked_blobs_list: List of Entry objects to be added to
960 for entry in self.GetEntries().values():
961 entry.CheckFakedBlobs(faked_blobs_list)
963 def CheckOptional(self, optional_list):
964 """Check the section for missing but optional external blobs
966 If there are missing (optional) blobs, the entries are added to the list
969 optional_list (list): List of Entry objects to be added to
971 for entry in self.GetEntries().values():
972 entry.CheckOptional(optional_list)
974 def check_missing_bintools(self, missing_list):
975 """Check if any entries in this section have missing bintools
977 If there are missing bintools, these are added to the list
980 missing_list: List of Bintool objects to be added to
982 super().check_missing_bintools(missing_list)
983 for entry in self.GetEntries().values():
984 entry.check_missing_bintools(missing_list)
986 def _CollectEntries(self, entries, entries_by_name, add_entry):
987 """Collect all the entries in an section
989 This builds up a dict of entries in this section and all subsections.
990 Entries are indexed by path and by name.
992 Since all paths are unique, entries will not have any conflicts. However
993 entries_by_name make have conflicts if two entries have the same name
994 (e.g. with different parent sections). In this case, an entry at a
995 higher level in the hierarchy will win over a lower-level entry.
998 entries: dict to put entries:
1001 entries_by_name: dict to put entries
1004 add_entry: Entry to add
1006 entries[add_entry.GetPath()] = add_entry
1007 to_add = add_entry.GetEntries()
1009 for entry in to_add.values():
1010 entries[entry.GetPath()] = entry
1011 for entry in to_add.values():
1012 self._CollectEntries(entries, entries_by_name, entry)
1013 entries_by_name[add_entry.name] = add_entry
1015 def MissingArgs(self, entry, missing):
1016 """Report a missing argument, if enabled
1018 For entries which require arguments, this reports an error if some are
1019 missing. If missing entries are being ignored (e.g. because we read the
1020 entry from an image rather than creating it), this function does
1024 entry (Entry): Entry to raise the error on
1025 missing (list of str): List of missing properties / entry args, each
1028 if not self._ignore_missing:
1029 missing = ', '.join(missing)
1030 entry.Raise(f'Missing required properties/entry args: {missing}')
1032 def CheckAltFormats(self, alt_formats):
1033 for entry in self.GetEntries().values():
1034 entry.CheckAltFormats(alt_formats)
1036 def AddBintools(self, btools):
1037 super().AddBintools(btools)
1038 for entry in self.GetEntries().values():
1039 entry.AddBintools(btools)
1041 def read_elf_segments(self):
1042 entries = self.GetEntries()
1044 # If the section only has one entry, see if it can provide ELF segments
1045 if len(entries) == 1:
1046 for entry in entries.values():
1047 return entry.read_elf_segments()