Here is how I solved my issue -
1. Right-click the root folder, select Properties, then select the Security tab.
2. Click Advanced.
3. The window that pops up shows the current owner. Click "Change." The "Select User or Group" pop-up box appears.
4. In the text box, type your current user name (you should be an admin) and hit the "Check Names" button. Your name will be replaced with the "official" system name (i.e., preceded by the computer name and a backslash).
5. Click OK.
6. The box disappears, sending you back to the Advanced Security Settings window.
7. A new tick box appears below the owner, with the text "Replace owner on subcontainers and objects." Select this tick box.
8. I also selected the tick box "Replace all child object permission entries with inheritable permission entries from this object", but I think I had to run this twice - once with this box ticked and once without.
9. Make sure that "Everyone" appears in the Permission entries window; if not, click Add. The same box from step 4 shows up again. Type Everyone, click "Check Names" and then "OK".
10. Hit Apply.
I also see that the folder ownership can be changed to "Everyone" - I haven't tried changing all my folder owners to this, but doing so might eliminate this issue altogether if the drive is moved to a different computer.
Hope this helps someone; this problem and the lack of documentation were quite aggravating.
Tuesday, 29 December 2015
Monday, 7 December 2015
Diagnostic Policy Service could not start on Local Computer - access denied
Getting the error: "Windows could not start the Diagnostic Policy Service service on Local Computer. Error: Access is denied."
First, let's check the status of the Diagnostic Policy Service and make sure that it is set to Automatic.
1. Click Start, type services.msc in the Start search box and hit Enter.
2. Locate the Diagnostic Policy Service.
3. Right-click the service and select Restart.
4. Now, right-click the Diagnostic Policy Service and select Properties.
5. Under the General tab, make sure the Startup type is set to Automatic.
Next, check the services the Diagnostic Policy Service depends on and make sure they are also started and set to Automatic. To see them, right-click the service, select Properties, and open the Dependencies tab.
You may also try doing a System Restore to a point when the issue didn’t exist and check if that fixes the issue.
OR,
Go to Start, search for cmd, right-click the returned cmd.exe and select "Run as administrator". At the prompt, type:
net localgroup Administrators /add networkservice
Press Enter, then type:
net localgroup Administrators /add localservice
Press Enter, then type:
exit
Press Enter and restart your computer.
Open services and make sure the service is started.
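The service's status and startup type can also be checked from an elevated command prompt; a minimal sketch, assuming the service's short name is DPS (the default for the Diagnostic Policy Service):
rem show the current state of the Diagnostic Policy Service
sc query DPS
rem set the startup type to Automatic and start the service
sc config DPS start= auto
net start DPS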
Monday, 26 October 2015
Serial Connections Link Speed And Bandwidth
R F Design
Feb 15, 2012
The need for processing speed and rapid access to large amounts of data has made serial architectures the dominant interconnect technologies for military and aerospace systems.
Three things designers can never get enough of are computing power, data storage, and interconnect bandwidth. The latter of these becomes more important as computing solutions continue to incorporate multicore processor architectures. While parallel bus architectures like VME and ISA bus are not going away, they cannot deliver the speed and bandwidth necessary for high-end applications found in many military and aerospace systems—such as the performance levels provided by the 3U VPX backplane found in the ESP-A8161 1/2 ATR box (see figure) from Elma Bustronic.
High-speed interconnect technologies are packet-oriented, point-to-point, mostly serial interfaces, and they often scale to higher bandwidth by increasing the number of lanes. The quintessential example is PCI Express. The standard defines x1, x2, x4, x8, x16, and x32 links. The x32 links are not in use, and x2 links are starting to show up in some chipsets. The x1, x4, x8, and x16 links are quite common. Multiple devices are connected via a switch.
PCI Express (PCIe) is the de facto standard for host-to-peripheral interfaces. Processor chips and chip sets that have other high speed serial interfaces like Ethernet and SATA will typically have one or more PCIe interfaces as well. PCIe is also used to connect a host to adapter chips for other high speed interfaces including Ethernet, SATA, SAS, Serial RapidIO, InfiniBand, and Fibre Channel, as well as USB, HDMI, and DisplayPort.
The second-generation PCIe (Gen 2) standard runs at 5 Gb/s and employs an 8-b/10-b encoding scheme; it is backwards compatible, as are the latest PCIe Gen 3 and future Gen 4 standards.
PCIe Gen 3 offers state-of-the-art performance at present, with a lane running at 8 Gb/s. It switches to a 128-b/130-b encoding scheme which, combined with the 60% increase in lane speed, essentially doubles the throughput of the Gen 2 standard.
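The arithmetic behind that doubling is straightforward: a Gen 2 lane carries 5 Gb/s x 8/10 = 4 Gb/s of payload, while a Gen 3 lane carries 8 Gb/s x 128/130, or roughly 7.9 Gb/s, just under twice the Gen 2 figure.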
While PCIe Gen 4 will run lanes at 16 Gb/s, it is not planned for release until 2015; Gen 3 adoption has not been as fast as that for Gen 2. PCIe Gen 3 includes a number of enhancements such as atomic operations, dynamic power allocation, and multicast support. Gen 4 is likely to include a more flexible approach to implementation, allowing developers to trade off distance, bandwidth, and power. PCIe is already used for box-to-box connections that were not possible with older parallel PCI interfaces.
Storage interfaces, especially SAS RAID controllers, are connected via PCIe. SATA and Serial Attached SCSI (SAS) grew from parallel ATA and SCSI roots. The slow migration from parallel to serial storage interfaces is essentially complete with 6-Gb/s SATA being the norm for consumer and embedded drives. These days, the drive could as easily be a flash drive instead of a hard disk drive. SATA and SAS unified the serial interface and connectors, so it is usually possible to plug a SATA drive into a SAS controller (although the reverse is not true).
The current crop of SAS drives also runs at 6 Gb/s, but 12 Gb/s SAS is now emerging in the enterprise. These will also be valuable in embedded applications that require higher throughput storage. SATA and SAS controllers often deliver higher throughput than a single drive is capable of when supporting multiple drives.
Solid state disks (SSD) have pushed the limits of SATA and SAS. Linking storage directly via PCIe allows designers to take advantage of PCIe’s higher bandwidth. Proprietary PCIe/flash solutions like those from Fusion-io delivered bandwidths in excess of what SATA or SAS could provide. Likewise, the number of transactions rose significantly.
Two new standards based on PCIe are Non-Volatile Memory Express (NVMe) and SCSI Express. NVMe targets board-level interfaces with flash storage on-board. This is also true for SCSI Express, although what is behind the interface could also be a SATA or SAS drive. This could be an advantage for SCSI Express, since it could provide a single interface for both board- and drive-based storage.
Fibre Channel is a high-speed serial interface that is primarily found in enterprise systems. It supports a number of topologies including point-to-point, loop, and switched fabrics. Copper interfaces can run at 2, 4, and 8 Gb/s. Optical signal transport is also in the mix; for example, 16-Gb/s Fibre Channel (16GFC) is an optical interface. Hard disk drives with native 4-Gb/s Fibre Channel interfaces are still available. FICON (Fibre Channel Connection) is a mainframe interconnect that operates over fiber runs to 20 km.
For completeness, some storage protocols support a variety of general-use applications, such as iSCSI and Fibre Channel over Ethernet (FCoE). Storage protocols tend to map almost directly to physical storage devices, such as magnetic and solid-state drives. iSCSI is popular in the virtualized cloud environment. Not surprisingly, iSCSI over InfiniBand is part of the mix via iSER (iSCSI Extensions for RDMA).
Ethernet is one of the more versatile interfaces, carrying just about everything (including storage protocols). A wide range of Ethernet speeds may be found on a network, from 10 Mb/s to 100 Gb/s (100G). The dominant deployment at present is 1-Gb/s (1G) Ethernet, while 10/100 Ethernet is commonly supported on single-chip microprocessors. The high end of Ethernet in common use is 10G Ethernet, with 40G Ethernet quickly gaining in popularity. The fastest Ethernet speeds, 100 Gb/s, are often found in enterprise and high-end embedded systems. The big difference at 40 and 100 Gb/s is that optical fiber is the cabling standard, with copper in the works for short runs, which is useful for the backplane. Optical support has been available for all Ethernet speeds, but copper cabling dominates at low and midrange speeds.
High-speed serial backplanes, like those found on VPX systems, support a range of interconnects, though Ethernet tends to be a dominant player. Some of these backplanes include Ethernet only as a secondary network; in those cases, Serial RapidIO and InfiniBand are the primary players. Ethernet provides best-effort delivery, whereas Serial RapidIO, InfiniBand, and PCIe guarantee delivery at the hardware level. Features like low latency, flow control, packet size, and low protocol overhead also distinguish these alternatives from Ethernet.
Serial RapidIO handles packets up to 256 B. It provides a switched peer-to-peer network. It has a scaled lane approach like PCIe with x1, x2, x4, x8, and x16 configurations. Lane speeds include 1.25, 2.5, 3.125, 5.0, and 6.25 GHz. The future 10xN specifications define a speed of 10 GHz/lane with the technology scaling to 25 GHz.
Serial RapidIO interfaces can be found on a number of digital-signal-processing (DSP) and central-processing-unit (CPU) chips. Serial RapidIO has been popular in embedded applications such as multifunction phased-array radar systems where large amounts of sensor information are passed to computational cores. Its real-time performance and scalability also make Serial RapidIO ideal for embedded applications. The x86 family of CPUs was rarely applied in Serial RapidIO systems, but this is changing with the availability of the PCIe/Serial RapidIO bridge chip from IDT and low-power, 64-b x86 processors.
InfiniBand has a maximum 4 kB packet size. It supports x1, x4, x8, and x12 lanes. Common speeds include 2.5-Gb/s Single Data Rate (SDR), 5-Gb/s Double Data Rate (DDR), and 10-Gb/s Quad Data Rate (QDR) InfiniBand which use 8-b/10-b encoding. The 14-Gb/s Fourteen Data Rate (FDR) version of InfiniBand uses 64-b/66-b encoding. The 26-Gb/s Enhanced Data Rate (EDR) version of InfiniBand is targeted for future high-speed applications.
InfiniBand was initially developed with storage and supercomputing applications in mind; a majority of the top supercomputers employ it. One feature that developers can take advantage of is Remote Direct Memory Access (RDMA). It is a useful function, and the same capability can be carried over Ethernet using RoCE (RDMA over Converged Ethernet). Of course, the overhead is significantly higher with Ethernet.
It is also possible to tunnel Ethernet over InfiniBand using Ethernet over InfiniBand (EoIB). There is also IP over InfiniBand (IPoIB). There is even a standard for Fibre Channel over InfiniBand (FCoIB). These technologies allow an InfiniBand-only fabric to link nodes directly to other networks. Mellanox’s SwitchX chips actually provide this in silicon IC form, handling Ethernet, InfiniBand, and Fibre Channel links.
All of these high-speed serial technologies are found in the latest military and avionic platforms. Some, like Fibre Channel, are specialized enough to be found in very specific applications. In this case, it would be enterprise platforms or embedded applications requiring lots of storage bandwidth.
Lastly, it should be noted that there are a number of other high-speed serial interconnects that were not addressed above. These include display interconnects like HDMI and DisplayPort and peripheral interconnects like Universal Serial Bus (USB). SuperSpeed USB 3.0 runs at 5 Gb/s. It is full duplex and point-to-point but it is restricted to a single lane.
The nice feature of all these high-speed interconnects is that the wiring technology is identical or very similar. VITA’s VXS and VPX standards use the same connectors for all the wired implementations. Point-to-point connections for networks like Ethernet, Serial RapidIO, and InfiniBand can be selected based on design requirements.
Thursday, 28 May 2015
THE DIFFERENCES BETWEEN FAT, FAT32, NTFS, EXT2, EXT3, AND EXT4
To date, the Microsoft Windows operating system has had three file systems:
1. FAT16 (File Allocation Table 16)
Before FAT16 there was actually an earlier MS-DOS file system, FAT12, but because of its many shortcomings FAT16 was introduced; MS-DOS started using FAT16 in 1981. Originally the system was designed to manage files on floppy disks, and it has been revised several times so that it can also manage files on hard disks. The advantage of FAT16 is that it is compatible with almost every operating system, whether Windows 95/98/ME, OS/2, Linux, or even Unix. Behind all that, however, the biggest problem with FAT16 is that a partition has a fixed number of clusters, so the bigger the hard disk, the bigger the cluster size becomes. In addition, FAT16 does not support compression, encryption, or access control on a partition.
2. FAT32 (File Allocation Table 32)
FAT32 first appeared in Windows 95 OSR2 and is a further development of FAT16. FAT32 can hold a larger number of clusters in a partition, and it also handles hard disks better than FAT16. However, FAT32 has a weakness that FAT16 does not: only a limited number of operating systems can recognize FAT32. Unlike FAT16, which is recognized by almost every operating system, this is not a problem if you run FAT32 on Windows XP, because Windows XP does not care which file system is used on a partition.
3. NTFS (New Technology File System)
NTFS was first introduced with Windows NT and is a completely different file system from the FAT technology. NTFS offers much better security, file compression, clustering, and even data encryption support. NTFS is the standard file system for Windows XP, and if you upgrade an ordinary Windows installation you will be asked whether you want to upgrade to NTFS or keep using FAT. If you have already upgraded to Windows XP and did not make the change, that is not a problem, because you can convert to NTFS at any time. Keep in mind, however, that once you are using NTFS, downgrading back to FAT without losing data is a problem.
In general, NTFS is not compatible with other operating systems installed on the same computer (dual-boot setups), and it is not even detected if you boot from a floppy disk. For that reason, it is strongly recommended that you set aside a small partition formatted with the FAT file system at the beginning of the disk. You can use this partition to store recovery tools in case you run into problems.
GNU/Linux, meanwhile, has several file systems:
1. Ext2 (Second Extended)
EXT2 is a powerful file system on Linux. It is also one of the most robust file systems and forms the basis of every Linux distribution. In the EXT2 file system, file data is stored as data blocks. These blocks are all the same length, and although the length can vary between EXT2 file systems, the block size of a given file system is set when it is created with the mke2fs command. If the block size is 1024 bytes, then a file of 1025 bytes will occupy 2 blocks, which means we waste half a block per file.
EXT2 defines the file system topology by associating every file on the system with an inode data structure. An inode records which blocks in a file hold the data, along with each file's access rights, modification times, and file type. Every file in an EXT2 file system consists of a single inode, and every inode has a unique identification number. The file system's inodes are stored in an inode table. A directory in an EXT2 file system is a special file that contains pointers to the inodes of each of its entries.
2. Ext3 (Third Extended)
EXT3 is an improvement on the EXT2 file system. The improvements bring several advantages, including:
a. After a power failure, an "unclean shutdown", or a system crash, an EXT2 file system must go through a check with the e2fsck program. This process can take a long time, making booting very slow, particularly for large disks containing a great deal of data, and none of the data can be accessed while it runs.
The journal provided by EXT3 means data no longer has to be checked after a system failure. An EXT3 file system is only checked when there is a hardware fault such as hard disk damage, which is very rare. The time EXT3 needs to recover after an "unclean shutdown" does not depend on the size of the file system or the number of files, but on the size of the journal used to maintain consistency. With the default journal size, recovery takes about a second, depending on the speed of the hardware.
b. Data integrity
EXT3 guarantees data integrity after a crash or "unclean shutdown". EXT3 lets us choose the type and level of protection the data receives.
c. Speed
Despite writing some data more than once, EXT3 often has higher throughput than EXT2 because EXT3's journaling optimizes hard disk head movement. We can choose among three journaling modes to maximize speed, but then data integrity is not guaranteed.
d. Easy migration
We can move from EXT2 to EXT3 without reformatting.
3. Ext4 (Fourth Extended)
Ext4 was released as complete and stable starting with kernel 2.6.28, so if your distro ships that kernel version or newer by default, your system automatically supports ext4 (provided it has been included in the kernel). In addition, e2fsprogs version 1.41.5 or later is required.
If you are still using the ext3 file system, you can convert it to ext4 in a few fairly simple steps.
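A rough sketch of those conversion steps (assuming the file system is on /dev/sdb1, which is only a placeholder; unmount it and back up your data first):
# enable the ext4 on-disk features on the existing ext3 file system
tune2fs -O extents,uninit_bg,dir_index /dev/sdb1
# force a full check so the updated metadata is consistent
e2fsck -fpDC0 /dev/sdb1
# finally, change the entry for this file system in /etc/fstab from ext3 to ext4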
The benefit of upgrading the file system to ext4 rather than staying on ext3 is 48-bit block addressing, which means a maximum file-system size of 1 EB = 1,048,576 TB with a maximum file size of 16 TB, plus fast fsck, journal checksumming, and defragmentation support.
Monday, 11 May 2015
The source file name(s) are larger than is supported by the file system
How to Delete Files which exceed 255 Characters Without 3rd Party Tools
Windows Explorer and many Windows applications, including PowerShell, are limited to a maximum file path of 255 characters. While this limitation exists at the application level, the NTFS file system itself does not impose it. In fact, file paths can be created remotely over the SMB protocol that exceed this limit, which is how most file servers end up with folder paths administrators can no longer maintain using the native Windows Explorer application.
When attempting to delete folders using Windows Explorer, the following errors may be experienced:
The source file name(s) are larger than is supported by the file system. Try moving to a location which has a shorter path name, or renaming to shorter name(s) before attempting this operation.
An unexpected error is keeping you from deleting the folder. If you continue to receive this error, you can use the error code to search for help with this problem.
Error: 0x80004005: Unspecified error
Even newer applications from Microsoft such as PowerShell do not support file paths longer than 255 characters, despite this being supported by NTFS.
Remove-Item: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
I am going to show you a way to remove excessively long file paths without using third-party tools such as Long Path Tool, which come at a price, or booting into a different operating system such as Linux to remove the unwanted file paths.
One Microsoft tool that is not bound by the 255-character limit is robocopy.exe. I know this because I often move large volumes of data with Robocopy between server infrastructure and have never hit a file path limitation. As a result, this is the tool I chose to remove the data.
If you use robocopy with the /MIR switch, it makes the destination folder exactly the same as the source folder. So if the source folder is empty, robocopy will make the destination folder empty as well, deleting all of its content.
I have a path here with three users whose folder structures exceed 255 characters. Windows Explorer failed to remove these folders.
I created an empty folder on C:\ called test, then used the mirror switch to copy the test folder to the HomeDrives folder.
robocopy /MIR c:\test E:\UserData\HomeDrives
After running the command all my user folders under E:\UserData\HomeDrives were deleted.
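Put together, the whole procedure looks roughly like this (E:\UserData\HomeDrives is the example path from this post; double-check the destination before running, because everything beneath it is deleted):
rem create an empty source folder
mkdir C:\test
rem mirror the empty folder over the folder tree you want to empty out
robocopy /MIR C:\test E:\UserData\HomeDrives
rem remove the temporary source folder afterwards
rmdir C:\test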
This is a handy trick for dealing with folders on file servers which have excessive amounts of long folder structures which exceed the 255 character limit.
Hope this has been helpful, feel free to leave me a comment below.
Monday, 8 December 2014
How to disable IPv6 in Windows
Starting in Windows Vista and Server 2008, Microsoft includes native support for IPv6 (Internet Protocol Version 6), and it is enabled by default. IPv6 is the new computer addressing protocol that will eventually replace IPv4, which is currently the most popular standard. Unless your network has a specific requirement for IPv6, and very few do, you can safely disable it. Unlike other protocols, you cannot disable IPv6 by disabling the protocol on each of your network interfaces. While that will disable the protocol for those interfaces, the loopback and tunnel interfaces will still have it enabled, which can cause problems with applications. The proper way to disable IPv6 is via the registry.
First, click on the Start Button and type in regedit and hit Enter.
Then, navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\TCPIP6\Parameters. Right-click Parameters and select New, then DWORD (32-bit) Value. Name the new value DisabledComponents and hit Enter. Now right-click the new DisabledComponents value you just created and select Modify. Set the value of DisabledComponents to FFFFFFFF and click OK.
After a reboot IPv6 will be disabled on all interfaces.
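The same value can also be created from an elevated command prompt; a minimal sketch, using the value given in this post:
rem create (or overwrite) the DisabledComponents value that disables IPv6 on all interfaces
reg add HKLM\SYSTEM\CurrentControlSet\services\TCPIP6\Parameters /v DisabledComponents /t REG_DWORD /d 0xFFFFFFFF /f
rem reboot for the change to take effect
shutdown /r /t 0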
Tuesday, 25 November 2014
Microsoft Outlook Error: An unknown error occurred, error code: 0x80070003
This issue is caused by changing or moving the location of your My Documents folder. It applies to Microsoft Outlook 2007 / 2010.
The notes below relate to Windows Vista / Windows 7.
Step 1 -
Delete your Outlook profile in Control Panel\Mail. This will not delete your emails, only the settings, so if you don't know them by heart I suggest making some notes first.
Step 2 -
Then run REGEDIT.
Navigate to HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Outlook\ (14.0 is Outlook 2010; Outlook 2007 uses 12.0).
Delete everything under these subkeys:
\PST\
\Search\
\Search\Catalog
Step 3 -
Rename the "Outlook" folder in your profile to "Outlook1":
C:\Users\"username"\AppData\Local\Microsoft\Outlook
Step 4 -
Restart the PC.
Open Outlook, set up your email account again, and re-add your data files.
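For reference, steps 2 and 3 can also be done from a command prompt; a rough sketch, assuming Outlook 2010 (the 14.0 key) and the default profile location, and noting that it removes the PST and Search keys outright rather than just their contents:
rem remove the cached PST and Search registry keys (and everything under them)
reg delete "HKCU\Software\Microsoft\Office\14.0\Outlook\PST" /f
reg delete "HKCU\Software\Microsoft\Office\14.0\Outlook\Search" /f
rem rename the local Outlook cache folder
ren "%LOCALAPPDATA%\Microsoft\Outlook" Outlook1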